victorious-exabyte-70545
10/29/2021, 6:26 PM

modern-city-46733
10/30/2021, 3:31 PM
kubectl commands that just don’t convert to Pulumi yet? (for example Helm Chart deploys with hooks)

colossal-car-2729
11/02/2021, 1:34 PM
image field in containers with pulumi preview --diff
~ kubernetes:apps/v1:Deployment: (update)
    [id=default/selfservice-s99yer46]
    ...
  ~ spec: {
      ~ template: {
          ~ spec: {
              ~ containers: [
                  ~ [0]: {
                    }
                    [1]: <null>
                  ~ [2]: {
                    }
                ]
            }
        }
    }
how can pulumi show the actual image diff?

average-market-57523
11/02/2021, 6:04 PM

salmon-raincoat-19475
11/02/2021, 7:02 PM
Output class in Python - which seems to suggest that it needs to stay in the Output type so that Pulumi objects are realized....
Is there a different approach I should be taking? Thanks!

future-window-78560
11/02/2021, 7:07 PM

quiet-leather-94755
11/05/2021, 12:57 PM
aws-load-balancer-controller, but it doesn't seem to pass values properly.. I'm trying to override image.repository to use a repo in eu-west-1, but it's still using the default us-west-2 region (and failing to pull images).. Any ideas? Possibly a bug, or am I just not passing the value in the right way?
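A minimal sketch of how an image.repository override is typically passed to this chart with k8s.helm.v3.Chart in TypeScript; the account ID and cluster name are placeholders, and the value keys should be checked against the chart's values.yaml for the version in use:

import * as k8s from "@pulumi/kubernetes";

// Sketch only: ACCOUNT_ID and the cluster name are placeholders.
// The controller image lives in per-region ECR registries, so for an
// eu-west-1 cluster the repository must point at an eu-west-1 registry.
const albController = new k8s.helm.v3.Chart("aws-load-balancer-controller", {
    chart: "aws-load-balancer-controller",
    fetchOpts: { repo: "https://aws.github.io/eks-charts" },
    namespace: "kube-system",
    values: {
        clusterName: "my-cluster",
        image: {
            repository: "ACCOUNT_ID.dkr.ecr.eu-west-1.amazonaws.com/amazon/aws-load-balancer-controller",
        },
    },
});

If the override is passed like this and the pods still pull from us-west-2, it is worth checking whether the value is nested under the right key for the chart version in use.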
square-coat-62279
11/07/2021, 10:13 AM
if (helmConfigs.secret) {
    const k8sSecret = new k8s.core.v1.Secret(helmConfigs.secret.name, {
        metadata: {
            name: helmConfigs.secret.name,
            namespace: helmConfigs.namespace
        },
        // computed property names need brackets; note that `data` expects
        // base64-encoded values, while `stringData` takes plain strings
        data: {
            [helmConfigs.secret.key]: helmConfigs.secret.value
        }
    });
}
input:
helm: {
    "cert-manager": {
        namespace: "cert-manager",
        version: "v1.5.3",
        repo: "https://charts.jetstack.io",
        chart: "cert-manager",
        values: require(`./google/helm_values/cert_manager`),
        secret: {}
    },
    "external-dns-public": {
        namespace: "external-dns",
        version: "3.1.1",
        repo: "https://charts.bitnami.com/bitnami",
        chart: "external-dns",
        values: require(`./google/helm_values/external_dns_public`),
        secret: {
            name: "external-dns-public",
            key: "credentials.json",
            value: "dnskey.privateKey"
        }
    }
}

brainy-appointment-20633
11/07/2021, 8:29 PM

broad-helmet-79436
11/08/2021, 10:19 AM
kubectl edit.
My StatefulSet is called prometheus-grafana, and runs in a namespace called monitoring.
The Kubernetes cluster runs in GCP on version 1.20.10-gke.1600.
kubectl tells me a few things (of particular interest, that the apiVersion is apps/v1):
$ kubectl get statefulset --context dev -n monitoring prometheus-grafana -o json
{
"apiVersion": "apps/v1",
"kind": "StatefulSet",
[…]
}
when I try to import the resource with Pulumi, however, the preview shows me some really old data:
// index.ts
new kubernetes.apps.v1.StatefulSet(
'prometheus-grafana',
{},
{ import: 'monitoring/prometheus-grafana', provider: kubernetesProvider }
);
$ pulumi preview --stack dev --diff
[…]
= ├─ kubernetes:apps/v1:StatefulSet prometheus-grafana import [diff: -spec~apiVersion,metadata]; 1 warni
= kubernetes:apps/v1:StatefulSet: (import)
[id=monitoring/prometheus-grafana]
[urn=urn:pulumi:dev::folio::kubernetes:apps/v1:StatefulSet::prometheus-grafana]
[…]
[provider=urn:pulumi:dev::folio::pulumi:providers:kubernetes::kubernetes_provider::a326d7af-28d0-4aa9-b5e1-7017e0244985]
~ apiVersion: "apps/v1beta2" => "apps/v1"
[…]
This is especially surprising because I’m using new kubernetes.apps.v1.StatefulSet(…), which I believe shouldn’t see anything with apiVersion apps/v1beta2.
From the looks of it, the Pulumi preview shows me the very first version of the statefulset that was deployed three years ago: I’ve also modified one of the containers to add an environment variable, and updated the Docker image multiple times using kubectl edit.
I figured that was really weird, so I decided to log the imported StatefulSet to see if it’s somehow an issue with the diffing and/or preview:
const statefulSet = new kubernetes.apps.v1.StatefulSet(
'prometheus-grafana',
{},
{ import: 'monitoring/prometheus-grafana', provider: kubernetesProvider }
);
statefulSet.apiVersion.apply(apiVersion =>
console.log({ apiVersion })
);
$ pulumi up --stack dev
[…]
Diagnostics:
pulumi:pulumi:Stack (folio-dev):
{ apiVersion: 'apps/v1' }
I double-checked that this code is indeed printing the imported resource by logging the metadata
and containers as well. It sure seems to me like the code is doing what I intended.
On a related note, I also see the correct data with this code:
const res = kubernetes.apps.v1.StatefulSet.get(
'p-g',
'monitoring/prometheus-grafana',
{ provider: kubernetesProvider }
);
res.apiVersion.apply(apiVersion => console.log({ apiVersion }));
If I’m right so far, I believe it means that the imported resource has its apiVersion
set to apps/v1
, but the Pulumi resource differ and/or preview for some reason believe they’re working with the Very First Version of the StatefulSet that was deployed in 2018.
I know that the Kubernetes API does some caching, particularly on requests to watch
a resource, by checking the resourceVersion
property on the objects. I verified that the StatefulSet’s resourceVersion
gets updated when I change the container spec’s image
with kubectl edit
, though.
I have no idea if it matters, but the StatefulSet was deployed by Cloud Marketplace (https://console.cloud.google.com/marketplace/product/google/prometheus), which means it has some ownerReferences
set. Still, I don’t really see that triggering this behaviour, since my diagnostics/logs show that Pulumi has the correct data to work with.
I’ve put quite a few hours into debugging this, and now I’m at a complete loss. Am I missing something obvious? 😅
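One detail that may be relevant to the stale data above: kubectl edit (unlike kubectl apply) does not refresh the kubectl.kubernetes.io/last-applied-configuration annotation, and the Pulumi Kubernetes provider appears to derive a resource's inputs from that annotation when it is present, so a StatefulSet last applied in 2018 could still advertise apps/v1beta2 there. A quick check in the same style as the logging above:

// Dump the last-applied-configuration annotation, which `kubectl apply` writes
// but `kubectl edit` leaves untouched, to see whether it still holds the old spec.
statefulSet.metadata.annotations.apply(annotations =>
    console.log(annotations?.["kubectl.kubernetes.io/last-applied-configuration"])
);
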
sparse-spring-91820
11/09/2021, 9:42 AM
const nginx = new k8s.helm.v3.Chart('nginx',
  {
    namespace,
    chart: 'nginx-ingress',
    version: '1.24.4',
    fetchOpts: { repo: 'https://charts.helm.sh/stable/' },
    values: {
      controller: {
        annotations: {
          'service.beta.kubernetes.io/aws-load-balancer-ssl-cert': 'arn:aws:acm:us-east-1:XXXXXXXXXXXX:certificate/XXXXXXXXXXXXXXXXXXXXXXXXXXXXX',
          'service.beta.kubernetes.io/aws-load-balancer-type': 'alb',
          'service.beta.kubernetes.io/aws-load-balancer-backend-protocol': 'http',
          'service.beta.kubernetes.io/aws-load-balancer-ssl-ports': 'https'
        },
        publishService: { enabled: true }
      }
    }
  },
  { providers: { kubernetes: options.provider } }
);
I tried a lot of variations but none of them worked for me. I end up getting error: 400 Bad Request "Play HTTP request was sent to HTTPS port", or in another case I get the auto-generated Kubernetes Ingress Controller Fake Certificate, which shows a Not secure flag in the browser because that certificate is not signed by an authority the browser trusts.
Has anyone else got nginx-ingress working with a certificate generated by ACM?
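For what it's worth, with the stable nginx-ingress chart the ELB annotations usually belong under controller.service.annotations rather than controller.annotations, and TLS is terminated at the load balancer by pointing the HTTPS listener back at nginx's plain-HTTP port, which is the usual fix for the "HTTP request was sent to HTTPS port" error. A hedged sketch of that shape (certificate ARN elided, the aws-load-balancer-type: alb annotation dropped since a Service annotation cannot provision an ALB, namespace and options.provider as in the snippet above, and key names worth double-checking against this chart version):

const nginx = new k8s.helm.v3.Chart('nginx', {
    namespace,
    chart: 'nginx-ingress',
    version: '1.24.4',
    fetchOpts: { repo: 'https://charts.helm.sh/stable/' },
    values: {
        controller: {
            service: {
                annotations: {
                    'service.beta.kubernetes.io/aws-load-balancer-ssl-cert': 'arn:aws:acm:us-east-1:XXXXXXXXXXXX:certificate/XXXX',
                    'service.beta.kubernetes.io/aws-load-balancer-backend-protocol': 'http',
                    'service.beta.kubernetes.io/aws-load-balancer-ssl-ports': 'https',
                },
                // Terminate TLS at the ELB and forward both listeners to nginx's HTTP port.
                targetPorts: { http: 'http', https: 'http' },
            },
            publishService: { enabled: true },
        },
    },
}, { providers: { kubernetes: options.provider } });
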
limited-rainbow-51650
11/09/2021, 11:13 AM
Deployment using a transformation? I seem to fail on setting it the correct way.
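A generic sketch of patching a Deployment rendered by a Helm chart via transformations, in case that is the missing piece; the chart name, repo, and the replicas example are placeholders, and the shape of the callback is the relevant part:

import * as k8s from "@pulumi/kubernetes";
import * as pulumi from "@pulumi/pulumi";

const chart = new k8s.helm.v3.Chart("example", {
    chart: "some-chart",
    fetchOpts: { repo: "https://example.com/charts" },
    transformations: [
        (obj: any, opts: pulumi.CustomResourceOptions) => {
            // Only touch rendered Deployments; every other kind passes through unchanged.
            if (obj.kind === "Deployment" && obj.apiVersion === "apps/v1") {
                obj.spec = obj.spec || {};
                obj.spec.replicas = 3;
            }
        },
    ],
});
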
colossal-car-2729
11/09/2021, 3:37 PM
k8s.helm.v3.chart when switching from LocalChartOpts to ChartOpts (so instead of using a chart in a local repository, using some upstream repo).
In my values.yaml is a line like this:
{{ .Files.Get “files/fluentd-prometheus.conf” | indent 4 }}
The file is read with LocalChartOpts but is just empty with ChartOpts. I assume the working directory with ChartOpts is neither where pulumi up is executed nor where the values.yaml file is located. Any idea on this?
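With ChartOpts the chart is fetched and rendered from the remote repository, so files that only exist alongside a local copy of the chart will not be visible to .Files.Get. One hedged workaround, assuming the chart exposes a value that accepts the config inline (the value key below is hypothetical), is to read the file in the Pulumi program and pass its contents through values:

import * as fs from "fs";
import * as k8s from "@pulumi/kubernetes";

// Read the local config file and hand it to the chart as a plain value.
const fluentdConf = fs.readFileSync("files/fluentd-prometheus.conf", "utf8");

const chart = new k8s.helm.v3.Chart("fluentd", {
    chart: "fluentd",                                   // placeholder chart name
    fetchOpts: { repo: "https://example.com/charts" },  // placeholder repo
    values: {
        prometheusConf: fluentdConf,                    // hypothetical value key
    },
});
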
modern-city-46733
11/09/2021, 5:06 PM
Error: rendered manifests contain a resource that already exists. Unable to continue with install: MutatingWebhookConfiguration "aws-load-balancer-webhook" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "aws-load-balancer-controller": current value is "release-xxxxx"

witty-belgium-75866
11/11/2021, 10:50 AM
aws-load-balancer-controller chart on my EKS cluster, using Pulumi (Python).
The installation goes well, but every time I execute pulumi up or pulumi preview, some of its resources are being changed:
it's very unnecessary and time consuming.
anybody encountered that? thx!

big-potato-91793
11/11/2021, 9:00 PM
Diagnostics:
  kubernetes:networking.k8s.io/v1:Ingress (ingress-name):
    error: 1 error occurred:
    * the Kubernetes API server reported that "namespace/ingress-name" failed to fully initialize or become live: Ingress.extensions "ingress-name" is invalid: spec.rules[0].http.paths[0].backend: Invalid value: "": resource or service backend is required
If I rerun my pipeline, everything passes without any problem.
Any idea?
nutritious-petabyte-61303
11/12/2021, 2:25 AM
nutritious-petabyte-61303
11/12/2021, 2:25 AM
nutritious-petabyte-61303
11/12/2021, 2:26 AM
nutritious-petabyte-61303
11/12/2021, 2:26 AM
nutritious-petabyte-61303
11/12/2021, 2:26 AM
nutritious-petabyte-61303
11/12/2021, 2:26 AM
nutritious-petabyte-61303
11/12/2021, 2:26 AM
nutritious-petabyte-61303
11/12/2021, 2:27 AM
nutritious-petabyte-61303
11/12/2021, 2:28 AM

mammoth-honey-6147
11/12/2021, 2:23 PM
Certificate, err := apiextensions.NewCustomResource(ctx, "cert", &apiextensions.CustomResourceArgs{
    // Various fields, etc
})
Which works perfectly fine - however, I need to effectively "wait" until the cert is in ready state - I'm using a DNS challenge method so it can take ~60-90 seconds. I have another app that references the cert that's generated via the corresponding secret, which, I believe, isn't in a fully formed state as the app reads the secret before the entire chain is fully complete. Therefore, how can I facilitate this wait in my code?
I could do this by inspecting the status field of the API object but it looks like I can't do that with the current SDK.
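There does not appear to be a first-class "wait for Ready" for a CustomResource's status, but one workaround is to gate dependents on an explicit readiness check. A rough sketch of the idea, shown in TypeScript rather than Go, shelling out to kubectl wait (which understands cert-manager's Ready condition); certResource stands in for the Certificate resource above and the timeout is arbitrary:

import { execSync } from "child_process";
import * as pulumi from "@pulumi/pulumi";

// Block during the real update (not the preview) until cert-manager reports Ready,
// then expose the name as an Output that dependents can consume to inherit the wait.
const certReady = certResource.metadata.apply(md => {
    if (!pulumi.runtime.isDryRun()) {
        execSync(
            `kubectl wait --for=condition=Ready certificate/${md.name} ` +
            `-n ${md.namespace} --timeout=180s`
        );
    }
    return md.name;
});

Anything that consumes certReady (for example as part of an annotation or environment value) will then not be created until the wait has completed.
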
quick-television-97606
11/12/2021, 11:25 PM

nutritious-petabyte-61303
11/15/2021, 3:49 PM
nutritious-petabyte-61303
11/15/2021, 3:49 PM

glamorous-australia-21342
11/16/2021, 12:45 AM
configured Kubernetes cluster is unreachable: unable to load schema information from the API server: the server has asked for the client to provide credentials. We have a role that is defined in pulumi rolemappings for the cluster and I can confirm the IAM role is in the aws-auth configmap. Anyone can auth to the cluster with k9s or kubectl using this role, but if anyone tries to run pulumi commands we get this auth error above. I can do anything because I created the cluster, I guess. I even tried using a different test account and have no issues, but my colleagues and my CI are broken. I am not using providerCredentialOpts, so pulumi is using whatever profile is in the default profile.

brave-ambulance-98491
11/16/2021, 1:08 AM
creationRoleProvider when you created the cluster? I ran into this issue when I first set up EKS and my solution was to use this parameter with a shared admin role. If you don't do this, the cluster is created with whatever your current AWS credentials are.
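For reference, a hedged sketch of what that looks like with @pulumi/eks; the role name, username, and group mapping are placeholders:

import * as aws from "@pulumi/aws";
import * as eks from "@pulumi/eks";

// A shared admin role that teammates and CI can assume.
const adminRole = aws.iam.Role.get("eks-admin", "eks-admin-role-name");
const adminProvider = new aws.Provider("eks-admin", {
    assumeRole: { roleArn: adminRole.arn },
});

const cluster = new eks.Cluster("cluster", {
    // Create (and later manage) the cluster as the shared role,
    // not as whoever happens to run `pulumi up`.
    creationRoleProvider: { role: adminRole, provider: adminProvider },
    // Map the same role into aws-auth so kubectl/k9s users can use it too.
    roleMappings: [{
        roleArn: adminRole.arn,
        username: "admin",
        groups: ["system:masters"],
    }],
});
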
glamorous-australia-21342
11/16/2021, 5:21 PM

brave-ambulance-98491
11/16/2021, 5:28 PM
creationRoleProvider and updating the program.

glamorous-australia-21342
11/16/2021, 6:29 PM