bored-terabyte-19735
07/08/2020, 4:42 AM
famous-bear-66383
07/09/2020, 1:19 PM
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
  labels:
    app: platform
spec:
  serviceName: mysql
  replicas: 1
  selector:
    matchLabels:
      app: platform
  template:
    metadata:
      labels:
        app: platform
        tier: mysql
      annotations:
        sidecar.istio.io/inject: "false"
    spec:
      terminationGracePeriodSeconds: 30
      containers:
      - image: mysql:5
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: mysql-password.txt
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
I have come up with this representation so far:
const mysqlPVC = new kx.PersistentVolumeClaim("mysql-pvc", {
    metadata: {
        name: "mysql-pv-claim",
        namespace: ns,
        labels: {
            app: "platform"
        }
    },
    spec: {
        accessModes: ["ReadWriteOnce"],
        resources: {
            requests: {
                storage: "5Gi"
            }
        }
    }
});
const pb = new kx.PodBuilder({
    terminationGracePeriodSeconds: 30,
    containers: [{
        name: "mysql",
        image: "mysql:5.7",
        ports: {mysql: 3306},
        // The PodBuilder automatically creates the corresponding volume and naming boilerplate.
        volumeMounts: [mysqlPVC.mount("/var/lib/mysql")]
    }]
});
const mysqlpd = new kx.StatefulSet("mysql", {
    metadata: {
        name: "mysql",
        namespace: ns,
        labels: {
            app: "platform-database"
        },
    },
    spec: pb.asStatefulSetSpec({replicas: 1})
});
My problem is how to add an annotation to the pod template. As you can see above, there is the annotation `sidecar.istio.io/inject: "false"`, which prevents Istio from injecting a sidecar. How can I add it using PodBuilder?
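The closest I have come is patching the spec after PodBuilder generates it. This is only a rough sketch of what I mean, assuming the value from asStatefulSetSpec can be wrapped in pulumi.output() and modified in an apply() (it may need a type cast in strict mode; the annotation key is the one from the YAML above):

import * as pulumi from "@pulumi/pulumi";

// Sketch: merge the Istio annotation into the pod template metadata of the
// spec that PodBuilder generates, then pass the patched spec to kx.StatefulSet.
const specWithAnnotation = pulumi.output(pb.asStatefulSetSpec({replicas: 1})).apply(spec => ({
    ...spec,
    template: {
        ...spec.template,
        metadata: {
            ...spec.template.metadata,
            annotations: {
                ...(spec.template.metadata?.annotations ?? {}),
                "sidecar.istio.io/inject": "false",
            },
        },
    },
}));

const mysqlpd = new kx.StatefulSet("mysql", {
    metadata: {
        name: "mysql",
        namespace: ns,
        labels: { app: "platform-database" },
    },
    spec: specWithAnnotation,
});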
bumpy-motorcycle-53357
07/16/2020, 2:12 PM
var configMap = new ConfigMap("aws-auth", new ConfigMapArgs()
{
    Metadata = new ObjectMetaArgs()
    {
        Namespace = "kube-system",
        Name = "aws-auth"
    },
    Data = new InputMap<string>()
    {
        ["mapRoles"] = workerNodeRoleArn.Apply(arn =>
            new[] {
                // recreate default aws node role map
                new
                {
                    groups = new[]
                    {
                        "system:bootstrappers",
                        "system:nodes"
                    },
                    rolearn = arn,
                    username = "system:node:{{EC2PrivateDNSName}}"
                }
            }.ToYaml()
        )
    }
});
The issue is that as is, Pulumi complains that the resource already exists. I don't want to import it (EKS Crosswalk doesn't appear to import it either) as I want this to work without manual intervention on brand new EKS clusters. How does Crosswalk do it, and how can I get Pulumi to take control of this ConfigMap without importing it?
At this point, it would be fine if I could just delete that ConfigMap and re-create it, but I don't think Pulumi supports that either.
bored-terabyte-19735
07/20/2020, 8:40 AM
calm-greece-42329
07/21/2020, 7:03 PM
calm-greece-42329
07/21/2020, 7:05 PM
error: resource default/nginx-znf7z56r was not successfully created by the Kubernetes API server : Could not create watcher for PersistentVolumeClaims objects associated with Deployment "nginx-znf7z56r": Get "https://XXXXX:443/api/v1/namespaces/default/persistentvolumeclaims?watch=true": unexpected EOF
calm-greece-42329
07/22/2020, 7:02 PM
gorgeous-elephant-23271
07/22/2020, 9:38 PM
prehistoric-account-60014
07/24/2020, 8:41 PM
helm dependency build before running pulumi up?
future-angle-6788
07/27/2020, 4:39 PM
export class HelmOperator extends ComponentResource {
    constructor(opts: any) {
        super('jameda:ops:platform:HelmClusterOperator', 'helm-operator', {}, opts);
        const namespace = 'kube-system';
        new kubernetes.helm.v3.Chart(
            'helm-cluster-operator',
            {
                chart: 'helm-operator',
                namespace,
                version: '1.1.0',
                resourcePrefix: 'helm-cluster-operator',
                values: {
                    createCRD: true,
                    helm: {
                        versions: 'v3',
                    },
                },
                fetchOpts: {
                    repo: 'https://charts.fluxcd.io',
                },
            },
            { parent: this }
        );
    }
}
kubernetes:apiextensions.k8s.io:CustomResourceDefinition (helm-cluster-operator-helmreleases.helm.fluxcd.io): error: Duplicate resource URN 'urn:pulumi:prod::ops-eks::jameda:ops:Platform$jameda:ops:platform:Extensions$jameda:ops:platform:HelmClusterOperator$kubernetes:helm.sh/v2:Chart$kubernetes:apiextensions.k8s.io/v1beta1:CustomResourceDefinition::helm-cluster-operator-helmreleases.helm.fluxcd.io'; try giving it a unique name
I don't know why I get the error; there is no other resource created with that name.
Changing the names also has no effect. I also deleted the complete stack, so there are currently no resources at the moment.
It also always shows v2 in the URN, although it is Helm v3.
Does anyone have an idea?
cool-egg-852
07/29/2020, 9:52 PM
kubernetes:apiregistration.k8s.io:APIService (v1beta1.external.metrics.k8s.io):
error: Duplicate resource URN 'urn:pulumi:staging::datadog::kubernetes:helm.sh/v2:Chart$kubernetes:apiregistration.k8s.io/v1:APIService::v1beta1.external.metrics.k8s.io'; try giving it a unique name
I think it's because, even though these are 2 separate providers, Pulumi isn't creating the URNs properly.
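The only workaround I can think of is giving each Chart its own resourcePrefix so the cluster-scoped children (such as this APIService) get distinct URNs per provider. A rough sketch only; the chart, repo, and provider names below are made up:

import * as k8s from "@pulumi/kubernetes";

// Sketch: one Chart per provider, each with a distinct resourcePrefix so the
// generated child resource names (and therefore URNs) no longer collide.
const datadogEast = new k8s.helm.v3.Chart("datadog-east", {
    chart: "datadog",
    fetchOpts: { repo: "https://helm.datadoghq.com" },
    resourcePrefix: "east",
}, { provider: eastProvider });

const datadogWest = new k8s.helm.v3.Chart("datadog-west", {
    chart: "datadog",
    fetchOpts: { repo: "https://helm.datadoghq.com" },
    resourcePrefix: "west",
}, { provider: westProvider });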
able-crayon-21563
07/29/2020, 11:14 PM
this.k8sprovider = new k8s.Provider(`cluster`, {
    kubeconfig: this.kubeconfig,
    suppressDeprecationWarnings: true
}, { parent: this });
For some reason, the id changed, leading to a post-step error.
kind-mechanic-53546
07/30/2020, 6:05 AM
able-crayon-21563
07/30/2020, 5:26 PM
error: resource complete event returned an error: failed to verify snapshot: resource (K8s namespace) refers to unknown provider (K8s provider with previous id)
better-actor-92669
08/04/2020, 12:51 PM
Master version: 1.17.7-gke.15
GKE cluster
pulumi==2.7.1
pulumi-gcp==3.16.0
pulumi-kubernetes==2.4.2
pulumi-postgresql==2.3.0
pulumi-random==2.2.0
Previously, I was able to deploy everything without any issues, so I assume it is either a Kubernetes API change or Pulumi's interaction with the Kubernetes API.
Can someone please help me identify the issue?
prehistoric-account-60014
08/04/2020, 3:25 PM
pulumi destroy is hanging forever due to a PVC not being finalized because of a pod in another stack relying on it. While there are many ways for us to fix this issue, the simplest way would be to avoid Pulumi waiting for the PVC finalizers to finish. Based on this pull request (https://github.com/pulumi/pulumi-kubernetes/pull/417) and this blog post (https://www.pulumi.com/blog/improving-kubernetes-management-with-pulumis-await-logic/), it seems that the pulumi.com/skipAwait annotation is what we want. Since those are a year old, things change fast in the Pulumi world, and I couldn't find skipAwait when searching the docs, I wanted to ask if this is still the recommended way to do things?
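For reference, this is roughly how we would expect to apply it, assuming the annotation is still honored; the PVC name and size below are invented:

import * as k8s from "@pulumi/kubernetes";

// Sketch: opt a PVC out of Pulumi's await logic via the pulumi.com/skipAwait
// annotation, so updates/destroys don't block on its finalizers.
const dataPvc = new k8s.core.v1.PersistentVolumeClaim("data-pvc", {
    metadata: {
        annotations: { "pulumi.com/skipAwait": "true" },
    },
    spec: {
        accessModes: ["ReadWriteOnce"],
        resources: { requests: { storage: "10Gi" } },
    },
});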
shy-tent-25663
08/04/2020, 5:53 PM
I have a set of CRDs deployed via a ConfigGroup. It's imperative that these CRDs are not deleted, so that the underlying resources are preserved and continue to function.
I recently renamed some directories and files in the repo, and now Pulumi wants to delete each ConfigFile associated with the `ConfigGroup`. The preview does not show deletion of the underlying CRDs, but as I understand it the ConfigFile is the parent of these resources. Is this merely the deletion of the reference to the YAML file, or will the CRDs be affected?
limited-knife-15571
08/04/2020, 11:38 PM
Diagnostics:
pulumi:pulumi:Stack (Hyperwave.Infrastructure-dev):
error: Program failed with an unhandled exception:
error: Traceback (most recent call last):
File "/usr/bin/pulumi-language-python-exec", line 85, in <module>
loop.run_until_complete(coro)
File "/usr/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
return future.result()
File "/home/dzucker/git/Hyperwave.Infrastructure/venv/lib/python3.8/site-packages/pulumi/runtime/stack.py", line 83, in run_in_stack
await run_pulumi_func(lambda: Stack(func))
File "/home/dzucker/git/Hyperwave.Infrastructure/venv/lib/python3.8/site-packages/pulumi/runtime/stack.py", line 51, in run_pulumi_func
await RPC_MANAGER.rpcs.pop()
File "/home/dzucker/git/Hyperwave.Infrastructure/venv/lib/python3.8/site-packages/pulumi/runtime/rpc_manager.py", line 67, in rpc_wrapper
result = await rpc
File "/home/dzucker/git/Hyperwave.Infrastructure/venv/lib/python3.8/site-packages/pulumi/runtime/resource.py", line 474, in do_register_resource_outputs
serialized_props = await rpc.serialize_properties(outputs, {})
File "/home/dzucker/git/Hyperwave.Infrastructure/venv/lib/python3.8/site-packages/pulumi/runtime/rpc.py", line 68, in serialize_properties
result = await serialize_property(v, deps, input_transformer)
File "/home/dzucker/git/Hyperwave.Infrastructure/venv/lib/python3.8/site-packages/pulumi/runtime/rpc.py", line 173, in serialize_property
value = await serialize_property(output.future(), deps, input_transformer)
File "/home/dzucker/git/Hyperwave.Infrastructure/venv/lib/python3.8/site-packages/pulumi/runtime/rpc.py", line 159, in serialize_property
future_return = await asyncio.ensure_future(awaitable)
File "/home/dzucker/git/Hyperwave.Infrastructure/venv/lib/python3.8/site-packages/pulumi/output.py", line 112, in get_value
val = await self._future
File "/home/dzucker/git/Hyperwave.Infrastructure/venv/lib/python3.8/site-packages/pulumi/output.py", line 153, in run
value = await self._future
File "/home/dzucker/git/Hyperwave.Infrastructure/venv/lib/python3.8/site-packages/pulumi/output.py", line 337, in gather_futures
return await asyncio.gather(*value_futures)
File "/home/dzucker/git/Hyperwave.Infrastructure/venv/lib/python3.8/site-packages/pulumi/output.py", line 112, in get_value
val = await self._future
File "/home/dzucker/git/Hyperwave.Infrastructure/venv/lib/python3.8/site-packages/pulumi/output.py", line 153, in run
value = await self._future
File "/home/dzucker/git/Hyperwave.Infrastructure/venv/lib/python3.8/site-packages/pulumi/output.py", line 174, in run
transformed: Input[U] = func(value)
File "/home/dzucker/git/Hyperwave.Infrastructure/venv/lib/python3.8/site-packages/pulumi_kubernetes/yaml.py", line 496, in <lambda>
CustomResourceDefinition(f"{x}", opts, **obj)))]
TypeError: __init__() got an unexpected keyword argument 'status'
error: an unhandled error occurred: Program exited with non-zero exit code: 1
Any idea what could be the error, or how to investigate?
limited-knife-15571
08/04/2020, 11:38 PM
import pulumi
from pulumi_kubernetes.yaml import ConfigFile

class OperatorLifecycleManager(pulumi.ComponentResource):
    def __init__(
        self,
        name: str,
        opts: pulumi.ResourceOptions,
        release: str = "0.15.1"
    ):
        super().__init__("kubernetes:module:OperatorLifecycleManager", name, None, opts)
        base_url = f"https://github.com/operator-framework/operator-lifecycle-manager/releases/download/{release}"
        crds_url = f"{base_url}/crds.yaml"
        olm_url = f"{base_url}/olm.yaml"
        self.crds = ConfigFile(f"{name}-crds", crds_url, opts=pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(parent=self)))
        self.olm = ConfigFile(f"{name}-olm", olm_url, opts=pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(parent=self, depends_on=[self.crds])))
The errors I'm getting are the ones shown above.
proud-spoon-58287
08/05/2020, 10:00 AM
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.21.0 ()
  labels:
    io.kompose.service: ksqldb-server
  name: ksqldb-server
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: ksqldb-server
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert
        kompose.version: 1.21.0 ()
      labels:
        io.kompose.service: ksqldb-server
    spec:
      containers:
      - env:
        - name: KSQL_BOOTSTRAP_SERVERS
          value: pkc-4r297.europe-west1.gcp.confluent.cloud:9092
        - name: KSQL_KSQL_INTERNAL_TOPIC_REPLICAS
          value: "3"
        - name: KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE
          value: "true"
        - name: KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE
          value: "true"
        - name: KSQL_KSQL_LOGGING_PROCESSING_TOPIC_REPLICATION_FACTOR
          value: "3"
        - name: KSQL_KSQL_SINK_REPLICAS
          value: "3"
        - name: KSQL_KSQL_STREAMS_REPLICATION_FACTOR
          value: "3"
        - name: KSQL_LISTENERS
          value: http://0.0.0.0:8088
        - name: KSQL_SASL_JAAS_CONFIG
          value: |
            org.apache.kafka.common.security.plain.PlainLoginModule required username="USERNAME" password="PASSOWRD";
        - name: KSQL_SASL_MECHANISM
          value: PLAIN
        - name: KSQL_SECURITY_PROTOCOL
          value: SASL_SSL
        image: confluentinc/ksqldb-server:0.10.1
        imagePullPolicy: ""
        name: ksqldb-server
        ports:
        - containerPort: 8088
        resources: {}
      hostname: ksqldb-server
      restartPolicy: Always
      serviceAccountName: ""
      volumes: []
kind-mechanic-53546
08/06/2020, 8:13 AM
const promSetup = new k8s.yaml.ConfigGroup(
    "promSetup",
    {
        files: [path.join("manifests/setup/", "*.yaml")],
    },
    { provider: conf.k8sClusterConfig.provider }
);
const promMain = new k8s.yaml.ConfigGroup(
    "promMain",
    {
        files: [path.join("manifests/", "*.yaml")],
    },
    { provider: conf.k8sClusterConfig.provider, dependsOn: [promSetup] }
);
I get an error for 2 of the CustomResourceDefinitions:
alertmanagers.monitoring.coreos.com (kubernetes:yaml:ConfigGroup$kubernetes:yaml:ConfigFile$kubernetes:apiextensions.k8s.io/v1beta1:CustomResourceDefinition)
error: resource alertmanagers.monitoring.coreos.com was not successfully created by the Kubernetes API server : customresourcedefinitions.apiextensions.k8s.io "alertmanagers.monitoring.coreos.com" already exists
&&
prometheuses.monitoring.coreos.com
and
v1beta1.metrics.k8s.io (kubernetes:yaml:ConfigGroup$kubernetes:yaml:ConfigFile$kubernetes:apiregistration.k8s.io/v1:APIService)
error: resource v1beta1.metrics.k8s.io was not successfully created by the Kubernetes API server : apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io" already exists
Checking the cluster, they do exist, and they were created by the initial deployment.
Running `pulumi up --refresh` does not fix it either.
Questions:
1. Is this a bug?
2. How can I recover from this?
Normally I would import the resource, but there is no import option for ConfigGroup.
proud-spoon-58287
08/06/2020, 9:46 AM
Hi all, I have to update a secret, but `pulumi up` does not show any changes. If I delete the secret using kubectl, it does not get recreated the next time I run `pulumi up`.
The only workaround I have found so far is to destroy the cluster and recreate it (which is bad). I see that destroy has the -t flag, but it seems that the resource name I am using is wrong. Is there a better way to work with this?
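My current understanding is that -t expects the resource's full URN (as printed by pulumi stack --show-urns) rather than the Kubernetes object name. Something along these lines, where the URN is just a placeholder:

# List the URNs tracked in the current stack, then target a single resource.
pulumi stack --show-urns
pulumi destroy -t 'urn:pulumi:dev::my-project::kubernetes:core/v1:Secret::my-secret'
pulumi up -t 'urn:pulumi:dev::my-project::kubernetes:core/v1:Secret::my-secret'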
proud-spoon-58287
08/06/2020, 2:39 PM
bright-policeman-55860
08/07/2020, 1:46 PM
bright-policeman-55860
08/07/2020, 2:04 PM
bright-policeman-55860
08/07/2020, 3:43 PM
bright-policeman-55860
08/10/2020, 4:08 PM
kubernetes.core.v1.Service.get("service", "kube-dns",
                               opts=pulumi.ResourceOptions(provider=kubernetes_provider))
This results in Preview failed: resource 'kube-dns' does not exist
But:
$ kubectl --kubeconfig /tmp/kubeconfig get svc -n kube-system kube-dns
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.100.0.10 <none> 53/UDP,53/TCP 3d22h
And yes, my kubernetes_provider is using the kube-system namespace.
bright-policeman-55860
08/10/2020, 4:21 PM
I'm not creating kube-dns myself; it comes with EKS. I really don't understand how data sources work with Kubernetes, do they even exist?
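One thing I still want to try, in case the getter expects a namespaced ID: passing "kube-system/kube-dns" as the ID instead of just the name. Roughly, sketched in TypeScript for illustration (the Python call should be analogous; variable names are placeholders):

import * as k8s from "@pulumi/kubernetes";

// Sketch: read the existing kube-dns Service by its "<namespace>/<name>" ID.
const kubeDns = k8s.core.v1.Service.get(
    "kube-dns",
    "kube-system/kube-dns",
    { provider: kubernetesProvider },
);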
microscopic-arm-19649
08/10/2020, 7:21 PM
microscopic-arm-19649
08/11/2020, 12:10 PM