faint-motherboard-95438
06/23/2020, 2:32 PM
Calling getResource() on a k8s.helm.v2.Chart yields a weird error:
Error: invocation of kubernetes:yaml:decode returned an error: error converting YAML to JSON: yaml: line 29: could not find expected ':'
I’m installing the mongodb-replicaset chart and trying to access the StatefulSet:
this.statefulSet = this.chart.getResource(
'apps/v1/StatefulSet',
replicaSetName,
)
const uri = `mongodb://${this.statefulSet.spec.serviceName}:27017`
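A note on the last line above: fields on a resource returned by getResource are Pulumi Outputs, so a plain template literal will not produce the string you expect. A hedged sketch of the same lookup — the chart arguments and the rendered StatefulSet name here are assumptions, not taken from the original program:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Sketch only: chart args and the rendered resource name are assumed.
const chart = new k8s.helm.v2.Chart("mongodb", {
    chart: "mongodb-replicaset",
    fetchOpts: { repo: "https://kubernetes-charts.storage.googleapis.com" },
});

// getResource addresses a rendered resource by "apiVersion/kind" plus name.
const statefulSet = chart.getResource("apps/v1/StatefulSet", "mongodb-mongodb-replicaset");

// spec.serviceName is an Output<string>; a plain `${...}` template literal
// would render "[object Object]", so use pulumi.interpolate instead.
export const uri = pulumi.interpolate`mongodb://${statefulSet.spec.serviceName}:27017`;
```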
Following the snippet example from here: https://www.pulumi.com/docs/guides/adopting/from_kubernetes/#provisioning-a-helm-chart
icy-jordan-58549
06/25/2020, 4:19 PM
Preview failed: resource operator/kafka-bootstrap-lb does not exist.
export const kafkaService = k8s.core.v1.Service.get(
"kafkaLB",
"operator/kafka-bootstrap-lb"
);
const kafkaRecord = new azure.dns.ARecord(
"kafka",
{
name: "kafka",
zoneName: zone.name,
resourceGroupName: config.resourceGroup.name,
ttl: 60,
records: [kafkaService.status.loadBalancer.ingress[0].ip],
},
{
dependsOn: kafkaService,
}
);
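For context on the snippet above: Service.get reads a resource that must already exist when the program runs, and dependsOn cannot make Pulumi wait for something an operator creates out-of-band. One workaround sketch, with a made-up kafkaReady config flag that you flip on (and run pulumi up again) once the operator has created the Service:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

const config = new pulumi.Config();

// "kafkaReady" is hypothetical: Service.get can only read resources that
// already exist, so gate the lookup until the operator has created it.
export const kafkaService = config.getBoolean("kafkaReady")
    ? k8s.core.v1.Service.get("kafkaLB", "operator/kafka-bootstrap-lb")
    : undefined;
```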
The Service operator/kafka-bootstrap-lb isn’t present in the (Helm) resources because the operator deploys it later, and it looks like Pulumi doesn’t wait for that service.
dazzling-sundown-39670
06/30/2020, 11:17 AM
helm install --name fission --namespace fission \
  https://github.com/fission/fission/releases/download/1.10.0/fission-all-1.10.0.tgz
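For reference, a Pulumi sketch of the same install: fetchOpts.repo expects the base URL of a Helm repository (the directory serving index.yaml, i.e. what you would pass to helm repo add), not the .tgz link. The fission-charts URL below is an assumption; check the project's install docs:

```typescript
import * as k8s from "@pulumi/kubernetes";

// `chart`/`version` select the chart inside the repository named by
// fetchOpts.repo (assumed URL, not verified).
const fission = new k8s.helm.v2.Chart("fission", {
    chart: "fission-all",
    version: "1.10.0",
    namespace: "fission",
    fetchOpts: { repo: "https://fission.github.io/fission-charts" },
});
```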
I figured out fetchOpts.repo, but I'm not sure what to put in it.
hundreds-portugal-17080
07/03/2020, 12:23 AM
I'm trying to set the filebeatConfig config with backticks using the following code, and it doesn't work. I used debug on the pulumi up command and it doesn't show much.
Errors: template: filebeat/templates/daemonset.yaml:29:27: executing "filebeat/templates/daemonset.yaml" at <include (print .Template.BasePath "/configmap.yaml") .>: error calling include: template: filebeat/templates/configmap.yaml:13:35: executing "filebeat/templates/configmap.yaml" at <.Values.filebeatConfig>: range can't iterate over filebeat.yml
Chart used: https://github.com/elastic/helm-charts/blob/master/filebeat/values.yaml
Pulumi specific code using the chart:
getFilebeatChart(elkChart: k8s.helm.v2.Chart, kibanaChart: k8s.helm.v2.Chart): k8s.helm.v2.Chart | undefined {
  if (!this.enabled) {
    return undefined;
  }
  const filebeatVersion = this.getConfig("filebeatVersion");
  const fileBeatConfiguration = this.getConfig("fileBeatConfiguration");
  return new k8s.helm.v2.Chart("filebeat", {
    path: "../helm_packages_v1/elastic-helm-charts-7.7.0/filebeat",
    transformations: [obj => {
      if (obj.kind === "DaemonSet") {
        obj.metadata.annotations = { "pulumi.com/timeoutSeconds": this.esTimeout };
      }
    }],
    values: {
      imageTag: this.esVersion,
      filebeatConfig: `filebeat.yml: |
        filebeat.inputs:
          - type: docker
            containers.ids:
              - '*'
            processors:
              - add_kubernetes_metadata:
                  in_cluster: true`,
    },
  }, {
    dependsOn: [elkChart, kibanaChart],
    providers: { "kubernetes": this.cluster.provider },
    customTimeouts: {
      create: "2m",
      delete: "2m",
      update: "2m",
    },
  });
}
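The "range can't iterate over" error suggests the chart templates over filebeatConfig as a map of filename to file contents, while the snippet above passes one big string. A minimal sketch of the expected shape (inferred from the error message, not from the chart source):

```typescript
// filebeatConfig as a map: key = config filename, value = raw file body,
// so the chart's `range` has entries to iterate over.
const filebeatConfig: Record<string, string> = {
    "filebeat.yml": [
        "filebeat.inputs:",
        "  - type: docker",
        "    containers.ids:",
        "      - '*'",
        "    processors:",
        "      - add_kubernetes_metadata:",
        "          in_cluster: true",
    ].join("\n"),
};

// This object would then be passed to the Chart as `values: { filebeatConfig }`.
console.log(Object.keys(filebeatConfig));
```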
bored-river-53178
07/03/2020, 7:31 AM
bitter-tiger-55434
07/03/2020, 1:31 PM
I ran pulumi refresh to sync the k8s cluster's status to the local stack. But when I tried to apply the previous code using pulumi up, it tells me there are no updates. Is there anything I misunderstand about pulumi refresh?
nutritious-judge-27316
07/07/2020, 3:16 PM
dazzling-sundown-39670
07/07/2020, 6:10 PM
most-spoon-17568
07/07/2020, 6:15 PM
bored-terabyte-19735
07/08/2020, 4:42 AM
famous-bear-66383
07/09/2020, 1:19 PM
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mysql
labels:
app: platform
spec:
serviceName: mysql
replicas: 1
selector:
matchLabels:
app: platform
template:
metadata:
labels:
app: platform
tier: mysql
annotations:
sidecar.istio.io/inject: "false"
spec:
terminationGracePeriodSeconds: 30
containers:
- image: mysql:5
name: mysql
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-pass
key: mysql-password.txt
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-pv-claim
I’ve come to this representation so far:
const mysqlPVC = new kx.PersistentVolumeClaim("mysql-pvc", {
metadata: {
name: "mysql-pv-claim",
namespace: ns,
labels: {
app: "platform"
}
},
spec: {
accessModes: ["ReadWriteOnce"],
resources: {
requests: {
storage: "5Gi"
}
}
}
});
const pb = new kx.PodBuilder({
terminationGracePeriodSeconds: 30,
containers: [{
name: "mysql",
image: "mysql:5.7",
ports: {mysql: 3306},
// The PodBuilder automatically creates the corresponding volume and naming boilerplate.
volumeMounts: [mysqlPVC.mount("/var/lib/mysql")]
}]
});
const mysqlpd = new kx.StatefulSet("mysql", {
metadata: {
name: "mysql",
namespace: ns,
labels: {
app: "platform-database"
},
},
spec: pb.asStatefulSetSpec({replicas: 1})
});
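A note on asStatefulSetSpec: it returns a plain spec object, so pod-template metadata can be patched after the fact. The helper below is hypothetical, and shows where annotations such as sidecar.istio.io/inject live — on the pod template, not on the container:

```typescript
// Hypothetical helper: merge annotations into spec.template.metadata,
// leaving the rest of the spec untouched.
function withPodAnnotations(spec: any, annotations: Record<string, string>): any {
    const template = spec.template ?? {};
    const metadata = template.metadata ?? {};
    return {
        ...spec,
        template: {
            ...template,
            metadata: {
                ...metadata,
                annotations: { ...(metadata.annotations ?? {}), ...annotations },
            },
        },
    };
}

// e.g. spec: pulumi.output(pb.asStatefulSetSpec({ replicas: 1 }))
//              .apply(s => withPodAnnotations(s, { "sidecar.istio.io/inject": "false" }))
const patched = withPodAnnotations(
    { replicas: 1, template: { metadata: { labels: { app: "platform" } } } },
    { "sidecar.istio.io/inject": "false" },
);
console.log(patched.template.metadata.annotations);
```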
My problem is how to add that annotation to the pod spec. As you can see above, there’s the annotation `sidecar.istio.io/inject: "false"`, which prevents Istio from injecting a sidecar.
How can I add it using PodBuilder?
bumpy-motorcycle-53357
07/16/2020, 2:12 PM
var configMap = new ConfigMap("aws-auth", new ConfigMapArgs()
{
Metadata = new ObjectMetaArgs()
{
Namespace = "kube-system",
Name = "aws-auth"
},
Data = new InputMap<string>()
{
["mapRoles"] = workerNodeRoleArn.Apply(arn =>
new[] {
//recreate default aws node role map
new
{
groups = new[]
{
"system:bootstrappers",
"system:nodes"
},
rolearn = arn,
username = "system:node:{{EC2PrivateDNSName}}"
}
}.ToYaml()
)
}
});
The issue is that as is, Pulumi complains that the resource already exists. I don't want to import it (EKS Crosswalk doesn't appear to import it either) as I want this to work without manual intervention on brand new EKS clusters. How does Crosswalk do it, and how can I get Pulumi to take control of this ConfigMap without importing it?
At this point, it would be fine if I could just delete that ConfigMap and re-create it, but I don't think Pulumi supports that either.
bored-terabyte-19735
07/20/2020, 8:40 AM
calm-greece-42329
07/21/2020, 7:03 PM
calm-greece-42329
07/21/2020, 7:05 PM
error: resource default/nginx-znf7z56r was not successfully created by the Kubernetes API server : Could not create watcher for PersistentVolumeClaims objects associated with Deployment "nginx-znf7z56r": Get "https://XXXXX:443/api/v1/namespaces/default/persistentvolumeclaims?watch=true": unexpected EOF
calm-greece-42329
07/22/2020, 7:02 PM
gorgeous-elephant-23271
07/22/2020, 9:38 PM
prehistoric-account-60014
07/24/2020, 8:41 PM
Do you need to run helm dependency build before running pulumi up?
future-angle-6788
07/27/2020, 4:39 PM
export class HelmOperator extends ComponentResource {
constructor(opts: any) {
super('jameda:ops:platform:HelmClusterOperator', 'helm-operator', {}, opts);
const namespace = 'kube-system';
new kubernetes.helm.v3.Chart(
'helm-cluster-operator',
{
chart: 'helm-operator',
namespace,
version: '1.1.0',
resourcePrefix: 'helm-cluster-operator',
values: {
createCRD: true,
helm: {
versions: 'v3',
},
},
fetchOpts: {
repo: 'https://charts.fluxcd.io',
},
},
{ parent: this }
);
}
}
kubernetes:apiextensions.k8s.io:CustomResourceDefinition (helm-cluster-operator-helmreleases.helm.fluxcd.io): error: Duplicate resource URN 'urn:pulumi:prod::ops-eks::jameda:ops:Platform$jameda:ops:platform:Extensions$jameda:ops:platform:HelmClusterOperator$kubernetes:helm.sh/v2:Chart$kubernetes:apiextensions.k8s.io/v1beta1:CustomResourceDefinition::helm-cluster-operator-helmreleases.helm.fluxcd.io'; try giving it a unique name
I don’t know why I get this error; there is no other resource created with that name.
Changing names doesn't have any effect either. I also deleted the complete stack, so there are currently no resources at the moment.
It also always shows v2 in the URN, albeit it is helm v3.
Does anyone have an idea?
cool-egg-852
07/29/2020, 9:52 PM
kubernetes:apiregistration.k8s.io:APIService (v1beta1.external.metrics.k8s.io):
error: Duplicate resource URN 'urn:pulumi:staging::datadog::kubernetes:helm.sh/v2:Chart$kubernetes:apiregistration.k8s.io/v1:APIService::v1beta1.external.metrics.k8s.io'; try giving it a unique name
I think it's because, even though they are 2 separate providers, Pulumi isn’t creating the URNs properly.
able-crayon-21563
07/29/2020, 11:14 PM
this.k8sprovider = new k8s.Provider(`cluster`, {
kubeconfig: this.kubeconfig,
suppressDeprecationWarnings: true
}, {parent: this});
For some reason, the id changed, leading to a post-step error.
kind-mechanic-53546
07/30/2020, 6:05 AM
able-crayon-21563
07/30/2020, 5:26 PM
error: resource complete event returned an error: failed to verify snapshot: resource (K8s namespace) refers to unknown provider (K8s provider with previous id)
better-actor-92669
08/04/2020, 12:51 PM
Master version: 1.17.7-gke.15
GKE cluster
pulumi==2.7.1
pulumi-gcp==3.16.0
pulumi-kubernetes==2.4.2
pulumi-postgresql==2.3.0
pulumi-random==2.2.0
Previously, I was able to deploy everything without any issues, so I assume it is either Kubernetes API changes or pulumi's interaction with Kubernetes API.
Can someone please help me identify the issue?
prehistoric-account-60014
08/04/2020, 3:25 PM
We have pulumi destroy hanging forever due to a PVC not being finalized, because a pod in another stack relies on it. While there are many ways for us to fix this issue, the simplest would be to stop Pulumi from waiting for the PVC finalizers to finish. Based on this pull request (https://github.com/pulumi/pulumi-kubernetes/pull/417) and this blog post (https://www.pulumi.com/blog/improving-kubernetes-management-with-pulumis-await-logic/), it seems that the pulumi.com/skipAwait annotation is what we want. Since those are a year old, things have changed fast in the Pulumi world, and I couldn’t find skipAwait when searching the docs, so I wanted to ask whether this is still the recommended way to do things?
shy-tent-25663
08/04/2020, 5:53 PM
We deploy a set of CRDs via a ConfigGroup. It’s imperative that these CRDs are not deleted, so that the underlying resources are preserved and continue to function.
I recently renamed some directories and files in the repo, and now Pulumi wants to delete each ConfigFile associated with the ConfigGroup. The preview does not show deletion of the underlying CRDs, but as I understand it the ConfigFile is the parent of these resources. Is this merely the deletion of the reference to the YAML file, or will the CRDs be affected?
limited-knife-15571
08/04/2020, 11:38 PM
Diagnostics:
pulumi:pulumi:Stack (Hyperwave.Infrastructure-dev):
error: Program failed with an unhandled exception:
error: Traceback (most recent call last):
File "/usr/bin/pulumi-language-python-exec", line 85, in <module>
loop.run_until_complete(coro)
File "/usr/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
return future.result()
File "/home/dzucker/git/Hyperwave.Infrastructure/venv/lib/python3.8/site-packages/pulumi/runtime/stack.py", line 83, in run_in_stack
await run_pulumi_func(lambda: Stack(func))
File "/home/dzucker/git/Hyperwave.Infrastructure/venv/lib/python3.8/site-packages/pulumi/runtime/stack.py", line 51, in run_pulumi_func
await RPC_MANAGER.rpcs.pop()
File "/home/dzucker/git/Hyperwave.Infrastructure/venv/lib/python3.8/site-packages/pulumi/runtime/rpc_manager.py", line 67, in rpc_wrapper
result = await rpc
File "/home/dzucker/git/Hyperwave.Infrastructure/venv/lib/python3.8/site-packages/pulumi/runtime/resource.py", line 474, in do_register_resource_outputs
serialized_props = await rpc.serialize_properties(outputs, {})
File "/home/dzucker/git/Hyperwave.Infrastructure/venv/lib/python3.8/site-packages/pulumi/runtime/rpc.py", line 68, in serialize_properties
result = await serialize_property(v, deps, input_transformer)
File "/home/dzucker/git/Hyperwave.Infrastructure/venv/lib/python3.8/site-packages/pulumi/runtime/rpc.py", line 173, in serialize_property
value = await serialize_property(output.future(), deps, input_transformer)
File "/home/dzucker/git/Hyperwave.Infrastructure/venv/lib/python3.8/site-packages/pulumi/runtime/rpc.py", line 159, in serialize_property
future_return = await asyncio.ensure_future(awaitable)
File "/home/dzucker/git/Hyperwave.Infrastructure/venv/lib/python3.8/site-packages/pulumi/output.py", line 112, in get_value
val = await self._future
File "/home/dzucker/git/Hyperwave.Infrastructure/venv/lib/python3.8/site-packages/pulumi/output.py", line 153, in run
value = await self._future
File "/home/dzucker/git/Hyperwave.Infrastructure/venv/lib/python3.8/site-packages/pulumi/output.py", line 337, in gather_futures
return await asyncio.gather(*value_futures)
File "/home/dzucker/git/Hyperwave.Infrastructure/venv/lib/python3.8/site-packages/pulumi/output.py", line 112, in get_value
val = await self._future
File "/home/dzucker/git/Hyperwave.Infrastructure/venv/lib/python3.8/site-packages/pulumi/output.py", line 153, in run
value = await self._future
File "/home/dzucker/git/Hyperwave.Infrastructure/venv/lib/python3.8/site-packages/pulumi/output.py", line 174, in run
transformed: Input[U] = func(value)
File "/home/dzucker/git/Hyperwave.Infrastructure/venv/lib/python3.8/site-packages/pulumi_kubernetes/yaml.py", line 496, in <lambda>
CustomResourceDefinition(f"{x}", opts, **obj)))]
TypeError: __init__() got an unexpected keyword argument 'status'
error: an unhandled error occurred: Program exited with non-zero exit code: 1
Any idea what could be the error, or how to investigate?
limited-knife-15571
08/04/2020, 11:38 PM
import pulumi
from pulumi_kubernetes.yaml import ConfigFile
class OperatorLifecycleManager(pulumi.ComponentResource):
def __init__(
self,
name: str,
opts: pulumi.ResourceOptions,
release: str = "0.15.1"
):
super().__init__("kubernetes:module:OperatorLifecycleManager", name, None, opts)
base_url = f"https://github.com/operator-framework/operator-lifecycle-manager/releases/download/{release}"
crds_url = f"{base_url}/crds.yaml"
olm_url = f"{base_url}/olm.yaml"
self.crds = ConfigFile(f"{name}-crds", crds_url, opts=pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(parent=self)))
self.olm = ConfigFile(f"{name}-olm", olm_url, opts=pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(parent=self, depends_on=[self.crds])))
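The traceback posted above ends with CustomResourceDefinition.__init__ rejecting a status keyword, which suggests the upstream manifests ship CRDs with a populated status block that this SDK version does not accept. A hedged workaround sketch of the kind of object transformation that strips it (shown in TypeScript for brevity; pulumi_kubernetes's ConfigFile accepts a similar transformations argument):

```typescript
// Drop a top-level `status` field from CRD objects in a parsed manifest;
// some SDK versions reject `status` as an unexpected constructor argument.
function stripCrdStatus(obj: Record<string, any>): Record<string, any> {
    if (obj.kind === "CustomResourceDefinition" && "status" in obj) {
        const rest = { ...obj };
        delete rest.status;
        return rest;
    }
    return obj;
}

const cleaned = stripCrdStatus({
    apiVersion: "apiextensions.k8s.io/v1beta1",
    kind: "CustomResourceDefinition",
    metadata: { name: "example.crd.local" },
    status: { conditions: [] },
});
console.log(Object.keys(cleaned));
```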
The errors I'm getting are the ones above.
proud-spoon-58287
08/05/2020, 10:00 AM
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose convert
kompose.version: 1.21.0 ()
labels:
io.kompose.service: ksqldb-server
name: ksqldb-server
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: ksqldb-server
strategy: {}
template:
metadata:
annotations:
kompose.cmd: kompose convert
kompose.version: 1.21.0 ()
labels:
io.kompose.service: ksqldb-server
spec:
containers:
- env:
- name: KSQL_BOOTSTRAP_SERVERS
value: pkc-4r297.europe-west1.gcp.confluent.cloud:9092
- name: KSQL_KSQL_INTERNAL_TOPIC_REPLICAS
value: "3"
- name: KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE
value: "true"
- name: KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE
value: "true"
- name: KSQL_KSQL_LOGGING_PROCESSING_TOPIC_REPLICATION_FACTOR
value: "3"
- name: KSQL_KSQL_SINK_REPLICAS
value: "3"
- name: KSQL_KSQL_STREAMS_REPLICATION_FACTOR
value: "3"
- name: KSQL_LISTENERS
value: http://0.0.0.0:8088
- name: KSQL_SASL_JAAS_CONFIG
value: |
org.apache.kafka.common.security.plain.PlainLoginModule required username="USERNAME" password="PASSWORD";
- name: KSQL_SASL_MECHANISM
value: PLAIN
- name: KSQL_SECURITY_PROTOCOL
value: SASL_SSL
image: confluentinc/ksqldb-server:0.10.1
imagePullPolicy: ""
name: ksqldb-server
ports:
- containerPort: 8088
resources: {}
hostname: ksqldb-server
restartPolicy: Always
serviceAccountName: ""
volumes: []
kind-mechanic-53546
08/06/2020, 8:13 AM
const promSetup = new k8s.yaml.ConfigGroup(
"promSetup",
{
files: [path.join("manifests/setup/", "*.yaml")],
},
{ provider: conf.k8sClusterConfig.provider }
);
const promMain = new k8s.yaml.ConfigGroup(
"promMain",
{
files: [path.join("manifests/", "*.yaml")],
},
{ provider: conf.k8sClusterConfig.provider, dependsOn: [promSetup] }
);
I get an error for 2 of the CustomResourceDefinitions:
alertmanagers.monitoring.coreos.com (kubernetes:yaml:ConfigGroup$kubernetes:yaml:ConfigFile$kubernetes:apiextensions.k8s.io/v1beta1:CustomResourceDefinition)
error: resource alertmanagers.monitoring.coreos.com was not successfully created by the Kubernetes API server : customresourcedefinitions.apiextensions.k8s.io "alertmanagers.monitoring.coreos.com" already exists
&&
prometheuses.monitoring.coreos.com
and
v1beta1.metrics.k8s.io (kubernetes:yaml:ConfigGroup$kubernetes:yaml:ConfigFile$kubernetes:apiregistration.k8s.io/v1:APIService)
error: resource v1beta1.metrics.k8s.io was not successfully created by the Kubernetes API server : apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io" already exists
Checking the cluster, they do exist, and they were created by the initial deployment.
Running up --refresh does not fix it either
Questions
1. Is this a bug?
2. How can I recover from this?
Normally I would import the resource, but there is no import option for ConfigGroup.
const promSetup0 = new k8s.yaml.ConfigGroup(
"promSetup0",
{
files: [path.join("manifests/setup/", "prometheus-operator-0*.yaml")],
},
{ provider: conf.k8sClusterConfig.provider }
);
const promSetup1 = new k8s.yaml.ConfigGroup(
"promSetup1",
{
files: [path.join("manifests/setup/", "prometheus-operator-[^0]*.yaml")],
},
{ provider: conf.k8sClusterConfig.provider, dependsOn: [promSetup0] }
);
const promMain = new k8s.yaml.ConfigGroup(
"promMain",
{
files: [path.join("manifests/", "*.yaml")],
},
{
provider: conf.k8sClusterConfig.provider,
dependsOn: [promSetup0, promSetup1],
}
);
I still get the errors, even with the dependsOn attribute.
I can get it to deploy successfully by commenting out promMain and promSetup1, then just promMain, running up in between each step.
Shouldn't dependsOn wait for the dependent resource to fully create before starting? Or is this a case of a delayed finish after reporting ok?