quiet-motorcycle-76742
04/01/2021, 3:50 PM
pulumi preview is convinced that it needs to delete all the resources deployed by that chart:
└─ kubernetes:helm.sh/v3:Chart aws-load-balancer-controller
-  ├─ kubernetes:rbac.authorization.k8s.io/v1:Role default/aws-load-balancer-controller-leader-election-role delete
-  ├─ kubernetes:core/v1:Secret default/aws-load-balancer-tls delete
-  ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRoleBinding aws-load-balancer-controller-rolebinding delete
-  ├─ kubernetes:core/v1:ServiceAccount default/aws-load-balancer-controller delete
-  ├─ kubernetes:rbac.authorization.k8s.io/v1:RoleBinding default/aws-load-balancer-controller-leader-election-rolebinding delete
-  ├─ kubernetes:core/v1:Service default/aws-load-balancer-webhook-service delete
-  ├─ kubernetes:admissionregistration.k8s.io/v1:ValidatingWebhookConfiguration aws-load-balancer-webhook delete
-  ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRole aws-load-balancer-controller-role delete
-  ├─ kubernetes:admissionregistration.k8s.io/v1:MutatingWebhookConfiguration aws-load-balancer-webhook delete
-  ├─ kubernetes:apps/v1:Deployment default/aws-load-balancer-controller delete
-  └─ kubernetes:apiextensions.k8s.io/v1beta1:CustomResourceDefinition targetgroupbindings.elbv2.k8s.aws delete
If you actually go through with the pulumi up, though, it (correctly) leaves all those resources alone. Has anyone seen anything like this recently? I saw a few old issues about pulumi preview being wrong, but none that are still open.
glamorous-australia-21342
04/01/2021, 7:20 PM
We run up on an existing cluster in EKS. I determined that we needed to associate an AWS IAM Role with a Kubernetes group in order for us to connect to each other's clusters. Now, however, after changing the CI from the original IAM user to a service account that assumes the role, we get the following error on up:
Configured Kubernetes cluster is unreachable: unable to load schema information from the API server: the server has asked for the client to provide credentials
We have our Pulumi code outputting the kubeconfig file, and it's the same one I am currently connected with, so it can't be that the cert is expired or the kubeconfig is invalid. Any help is appreciated.
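A likely direction, if the assumed role was never mapped into the cluster: EKS rejects any IAM identity that is not listed in the aws-auth ConfigMap with exactly this kind of credentials error. A minimal TypeScript sketch of that mapping; the role ARN and group are placeholders, and in a real cluster the entry has to be merged into the existing ConfigMap (or passed as roleMappings when the cluster is created with pulumi-eks):

import * as k8s from "@pulumi/kubernetes";

// Placeholder role/group: map the CI role to a Kubernetes group so the
// API server accepts tokens from principals that assumed it.
const mapRoles = [{
    rolearn: "arn:aws:iam::111122223333:role/ci-deploy", // placeholder ARN
    username: "ci-deploy",
    groups: ["ci-deployers"], // group referenced by your RBAC bindings
}];

const awsAuth = new k8s.core.v1.ConfigMap("aws-auth", {
    metadata: { name: "aws-auth", namespace: "kube-system" },
    // JSON is valid YAML, so a stringified array works for mapRoles
    data: { mapRoles: JSON.stringify(mapRoles) },
});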
handsome-state-59775
04/04/2021, 9:16 AM
handsome-state-59775
04/05/2021, 5:19 AM
error: resource ****/serviceAccount-****-ge0e5qf8 was not successfully created by the Kubernetes API server : ServiceAccount in version "v1" cannot be handled as a ServiceAccount: v1.ServiceAccount.ImagePullSecrets: []v1.LocalObjectReference: readObjectStart: expect { or n, but found ", error found in #10 byte of ...|ecrets":["****/|..., bigger context ...|{"apiVersion":"v1","imagePullSecrets":["****/regcred"],"kind":"ServiceAccount","metad|...
any insights? code as follows:
handsome-state-59775
04/06/2021, 4:15 AM
kubectl -n $NAMESPACE patch serviceaccount default -p '{"imagePullSecrets": [{"name": "regcred"}]}'
?
(azure-native, python, pulumi_kubernetes)
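The decode error two messages up is the API server rejecting imagePullSecrets given as a list of strings; it expects a list of LocalObjectReference objects. The thread is on Python, but the shape is the same everywhere; a minimal TypeScript sketch (the account name and namespace are placeholders):

import * as k8s from "@pulumi/kubernetes";

// Entries must be objects with a "name" key; bare strings like
// ["regcred"] produce the "expect { or n" decode error above.
const appServiceAccount = new k8s.core.v1.ServiceAccount("app-sa", {
    metadata: { name: "app-sa", namespace: "my-namespace" }, // placeholders
    imagePullSecrets: [{ name: "regcred" }],
});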
better-shampoo-48884
04/06/2021, 6:22 AM
const keyvaultCSI = new k8s.helm.v3.Chart("keyVaultCSI", {
    chart: "csi-secrets-store-provider-azure",
    version: "0.0.17",
    fetchOpts: {
        repo: "https://raw.githubusercontent.com/Azure/secrets-store-csi-driver-provider-azure/master/charts//",
    },
    values: {
        logFormatJSON: true,
    },
}, {
    provider: cluster,
});
And getting this as an error:
pulumi:pulumi:Stack baseline-k8s-dev.k8s.infratesting create error: Unhandled exception: Error: invocation of kubernetes:helm:template returned an error: failed to generate YAML for specified Helm chart: failed to pull chart: no cached repo found. (try 'helm repo update'):
Basically following these instructions for installation: https://azure.github.io/secrets-store-csi-driver-provider-azure/getting-started/installation/
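One thing worth ruling out here, as a guess rather than a confirmed diagnosis: the doubled trailing slash in the repo URL, since "failed to pull chart: no cached repo found" comes from helm failing to resolve the repo index. A sketch with a normalized URL and an explicit kubernetes.Provider; the kubeconfig config key is an assumption:

import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// assumption: the cluster's kubeconfig is available as stack config
const kubeconfig = new pulumi.Config().requireSecret("kubeconfig");
const provider = new k8s.Provider("aks", { kubeconfig });

const keyvaultCSI = new k8s.helm.v3.Chart("keyVaultCSI", {
    chart: "csi-secrets-store-provider-azure",
    version: "0.0.17",
    fetchOpts: {
        // same repo as above, without the trailing slashes
        repo: "https://raw.githubusercontent.com/Azure/secrets-store-csi-driver-provider-azure/master/charts",
    },
    values: { logFormatJSON: true },
}, { provider });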
wet-noon-14291
04/06/2021, 8:47 PM
Is there a known issue with last-applied-configuration compared to what is actually deployed? I have a case where an environment variable with a reference to a secret is defined in last-applied-configuration, but it is not under the actual spec. What's weird is that it is just this one variable; another variable defined exactly the same way on the line above in Pulumi is there in both places.
better-shampoo-48884
04/07/2021, 7:31 AM
better-shampoo-48884
04/07/2021, 7:39 AM
bumpy-laptop-30846
04/08/2021, 12:32 PM
better-shampoo-48884
04/08/2021, 3:36 PM
better-shampoo-48884
04/08/2021, 3:39 PM
unhandled rejection: CONTEXT(1168): Invoking function: tok=kubernetes:helm:template asynchronously
STACK_TRACE:
Error
at Object.debuggablePromise (c:\<path>\node_modules\@pulumi\pulumi\runtime\debuggable.js:69:75)
at c:\<path>\node_modules\@pulumi\pulumi\runtime\invoke.js:126:45
at Generator.next (<anonymous>)
at fulfilled (c:\<path>\node_modules\@pulumi\pulumi\runtime\invoke.js:18:58)
at processTicksAndRejections (node:internal/process/task_queues:94:5)
Searching around has shown similar messages when helm is unable to output something, but there is no error message outside of this. Also, this is on the stack.preview() step (using automation).
edit: grmbl.. may have found the culprit: by going through each helm resource one by one and making sure it's the only one commented out, I finally got a proper error message. So strange that I otherwise only get a bunch of these and nothing else..
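For anyone who lands on the same wall of unhandled rejections: the underlying helm:template failure can usually be surfaced without commenting resources out one by one, by catching the rejection from the preview call. A sketch assuming an automation-API LocalWorkspace stack (note the module path: before Pulumi 3.0 it lived under @pulumi/pulumi/x/automation):

import { LocalWorkspace } from "@pulumi/pulumi/automation";

async function previewWithErrors() {
    const stack = await LocalWorkspace.selectStack({
        stackName: "dev",   // placeholder
        workDir: "./infra", // placeholder
    });
    try {
        const result = await stack.preview();
        console.log(result.stdout);
    } catch (err) {
        // the helm:template failure surfaces here instead of dying
        // as an unhandled promise rejection
        console.error("preview failed:", err);
    }
}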
handsome-state-59775
04/09/2021, 8:54 AM
bumpy-laptop-30846
04/09/2021, 10:08 AM
export const hostname = ambassador.getResourceProperty("v1/Service", "ambassador", "status")
but status is not found. Whereas with
kubectl get svc ambassador -o yaml
I get an output with a status.
Is it normal that Pulumi does not find the info just after the creation of the chart?
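Two hedged guesses apply here. First, resources a Chart creates inside a namespace are looked up with the namespace-qualified overload of getResourceProperty. Second, status reflects live cluster state, so it is unknown during preview and only resolves once the Service actually exists. A sketch assuming the chart installs into an "ambassador" namespace:

// assumption: Service "ambassador" in namespace "ambassador"; for the
// default namespace the three-argument form in the question is correct
const status = ambassador.getResourceProperty(
    "v1/Service", "ambassador", "ambassador", "status");

// resolves only after the Service is live; unknown during preview
export const hostname = status.loadBalancer.ingress[0].hostname;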
better-shampoo-48884
04/10/2021, 8:23 AM
├─ kubernetes:helm.sh/v3:Chart akv2k8s
~  │ ├─ kubernetes:admissionregistration.k8s.io/v1:MutatingWebhookConfiguration akv2k8s/akv2k8s-envinjector update [diff: ~webhooks]
+- │ ├─ kubernetes:core/v1:Secret akv2k8s/akv2k8s-envinjector-tls replace [diff: ~data]
+- │ ├─ kubernetes:core/v1:Secret akv2k8s/akv2k8s-envinjector-ca replace [diff: ~data]
~  │ └─ kubernetes:apps/v1:Deployment akv2k8s/akv2k8s-envinjector update [diff: ~spec]
Edit: neeevermind! there is a tiny diff in the certificates generated.. a bit frustrating, but of no consequence. I was almost certain there might have been some encoding issue triggering the diff, but no: new certs are generated by the chart every time it's touched. Oh well.
better-shampoo-48884
04/10/2021, 8:33 AM
├─ kubernetes:helm.sh/v3:Chart akv2k8s
++ │ ├─ kubernetes:core/v1:Secret akv2k8s/akv2k8s-envinjector-ca created replacement [diff: ~data];
++ │ ├─ kubernetes:core/v1:Secret akv2k8s/akv2k8s-envinjector-tls created replacement [diff: ~data];
~  │ ├─ kubernetes:apps/v1:Deployment akv2k8s/akv2k8s-envinjector updated [diff: ~spec]; Deployment initialization
~  │ └─ kubernetes:admissionregistration.k8s.io/v1:MutatingWebhookConfiguration akv2k8s/akv2k8s-envinjector updated [diff: ~webhooks]
How do I match that?
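If the regenerated certificates are harmless but keep showing up as replacements, one mitigation is to ignore the generated Secret data through a chart transformation. A sketch; the chart and repo values are assumptions based on the thread, and ignoring data does mean Pulumi will no longer notice legitimate cert changes:

import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

const akv2k8s = new k8s.helm.v3.Chart("akv2k8s", {
    chart: "akv2k8s",
    fetchOpts: { repo: "https://charts.spvapi.no" }, // assumption
    transformations: [
        (obj: any, opts: pulumi.CustomResourceOptions) => {
            // the chart re-renders the CA/TLS secrets on every touch,
            // so treat their data as out-of-band
            if (obj.kind === "Secret" && /envinjector-(ca|tls)$/.test(obj.metadata?.name ?? "")) {
                opts.ignoreChanges = ["data"];
            }
        },
    ],
});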
better-shampoo-48884
04/12/2021, 5:53 AM
cuddly-dusk-95227
04/14/2021, 11:36 AM
pulumi-eks. what am I missing?
handsome-state-59775
04/14/2021, 4:58 PM
many-psychiatrist-74327
04/15/2021, 1:44 AM
handsome-state-59775
04/15/2021, 10:13 PM
rhythmic-actor-14991
04/16/2021, 10:05 AM
straight-cartoon-24485
04/17/2021, 12:36 AM
straight-cartoon-24485
04/18/2021, 7:50 AM
Regarding the Pulumi.<stack-name>.yaml file: I am wondering if there is a better way to get the <stack-name> to stay DRY.
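The stack name itself never needs to be hard-coded: the runtime exposes it, and config is already resolved per stack. A small sketch:

import * as pulumi from "@pulumi/pulumi";

// pulumi.getStack() returns the currently selected stack name, and
// pulumi.Config reads Pulumi.<stack-name>.yaml for it automatically
const stackName = pulumi.getStack(); // e.g. "dev"
const config = new pulumi.Config();
const region = config.require("region"); // hypothetical config key
export const qualifiedName = `myapp-${stackName}`;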
kind-mechanic-53546
04/19/2021, 3:10 AM
Using new docker.Image(... with a registry: {server: <azure container registry login server here>}, I'm getting the error Error: No digest available for image {registry:tag}.
This process works on Windows with the exact same Pulumi version and stack (but has other issues with WSL and file ownership, speed, etc.).
Per this file, running docker image inspect -f {{.Id}} imageName in a terminal gives a correct output, and pushing it with docker push {registry:tag} runs fine outside of Pulumi.
It looks like this was a prior issue here and here, but I'm running v2.25.2, so I'm not sure where to go.
straight-cartoon-24485
04/19/2021, 1:48 PM
limited-rainbow-51650
04/20/2021, 9:51 AM
This is about our Service component resource. In our own abstraction, we create a NodePort-based k8s Service resource, retrieve the actual nodePort value from the service using apply, and pass it on to a GKE-specific BackendConfig CRD. This is the code snippet:
const healthPortNumber: pulumi.Input<number> = this.service.spec.apply((spec) => {
    const healthPort = spec.ports.find((port) => {
        return port.name === 'health';
    });
    if (healthPort) {
        return healthPort.nodePort;
    } else {
        return 4001;
    }
});
We have a stack where the ports section of the Service spec contains this (taken from the stack state):
{
    "name": "health",
    "nodePort": 30449,
    "port": 4001,
    "protocol": "TCP",
    "targetPort": "health"
}
But we get undefined as the value for healthPort.nodePort. How is that possible??
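A debugging sketch rather than a fix: log what apply actually receives. During preview, unknown output values surface as undefined inside apply, which would make the find miss even though the stack state already holds a nodePort; the fallback keeps the lookup total either way:

import * as pulumi from "@pulumi/pulumi";

// inside the component, mirroring the snippet above
const healthPortNumber = this.service.spec.apply((spec) => {
    pulumi.log.info(`ports resolved as: ${JSON.stringify(spec?.ports)}`);
    const healthPort = (spec?.ports ?? []).find((p) => p.name === "health");
    return healthPort?.nodePort ?? 4001;
});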
prehistoric-kite-30979
04/21/2021, 4:18 PM
prehistoric-kite-30979
04/21/2021, 6:05 PM
colossal-plastic-46140
04/22/2021, 1:53 AM