better-shampoo-48884
03/26/2021, 1:25 PM
stocky-student-96739
03/29/2021, 9:29 PM
kubernetes:apps/v1:Deployment (web):
error: 2 errors occurred:
* the Kubernetes API server reported that "my-application/web-v845050y" failed to fully initialize or become live: 'web-v845050y' timed out waiting to be Ready
* Attempted to roll forward to new ReplicaSet, but minimum number of Pods did not become live
If I watch the cluster I can see all of the pods (2) in the RS stand up and become Ready, and the ReplicaSet/Deployment reports them all as up + Ready/Up-to-Date inside of 2 minutes. This is happening for every Deployment I have configured on this cluster.
Tried latest @pulumi/kubernetes
Node module, I'm on the latest Pulumi CLI binary, on EKS 1.19. I tried blowing everything up and redeploying. Nothing of note when describing the Deployment or ReplicaSet. It's like the Pulumi client is just ignoring the state of the ReplicaSet.
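If the cluster reports the Deployment healthy but the engine still times out, one escape hatch is to lengthen (or skip) pulumi-kubernetes' client-side readiness await via its documented annotations. A sketch only; the resource name and `appSpec` are placeholders for the program's real Deployment:

```typescript
import * as k8s from "@pulumi/kubernetes";

declare const appSpec: k8s.types.input.apps.v1.DeploymentSpec; // assumed: the existing spec

// Sketch, not a root-cause fix: pulumi-kubernetes drives readiness from these
// annotations, so a Deployment that is genuinely Ready in-cluster can be
// given more time, or the await can be opted out of entirely.
const web = new k8s.apps.v1.Deployment("web", {
    metadata: {
        annotations: {
            "pulumi.com/timeoutSeconds": "600", // wait up to 10 minutes
            // "pulumi.com/skipAwait": "true", // or: don't await readiness at all
        },
    },
    spec: appSpec,
});
```

Note that `skipAwait` also suppresses genuine failure detection, so the longer timeout is usually the safer first step.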
Any assistance would be appreciated; there's very little out there on the Googles other than what I've already tried.
ancient-megabyte-79588
03/29/2021, 11:20 PM
ancient-megabyte-79588
03/29/2021, 11:20 PM
I'm doing an import of the resource group, and the CLI always shows as wanting to replace it. The details (when trying to import) don't seem to indicate why it wants to replace it.
=> azure:core/resourceGroup:ResourceGroup: (import-replacement)
[urn=urn:pulumi:dpts-shared::KubernetesCluster::azure:core/resourceGroup:ResourceGroup::rg-kubernetes]
[provider=urn:pulumi:dpts-shared::KubernetesCluster::pulumi:providers:azure::default_3_52_1::04da6b54-80e4-46f7-96ec-b56ff0331ba9]
name: "rg-kubernetes"
+-azure:core/resourceGroup:ResourceGroup: (replace)
[id=/subscriptions/ad73ec2e-0337-4de5-983c-3944fcb68be8/resourceGroups/rg-kubernetes]
[urn=urn:pulumi:dpts-shared::KubernetesCluster::azure:core/resourceGroup:ResourceGroup::rg-kubernetes]
[provider: urn:pulumi:dpts-shared::KubernetesCluster::pulumi:providers:azure::default_3_6_1::09b02d81-f05d-4347-a5d2-831be11283e0 => urn:pulumi:dpts-shared::KubernetesCluster::pulumi:providers:azure::default_3_52_1::output<string>]
id : "/subscriptions/ad73ec2e-0337-4de5-983c-3944fcb68be8/resourceGroups/rg-kubernetes"
location: "centralus"
name : "rg-kubernetes"
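The provider URNs in the diff above change from `default_3_6_1` to `default_3_52_1`, which suggests the replace is default-provider churn rather than a real change to the group. One common pattern (a sketch; constructor args depend on how auth is configured) is to construct one explicit provider and protect the group:

```typescript
import * as azure from "@pulumi/azure";

// Sketch: with an explicit provider, upgrading @pulumi/azure can no longer
// surface as a provider change on the resource, and protect makes the engine
// refuse any delete or replace of the group outright.
const provider = new azure.Provider("azure-main", {});

const rg = new azure.core.ResourceGroup("rg-kubernetes", {
    name: "rg-kubernetes",
    location: "centralus",
}, { provider, protect: true });
```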
Do you have any insights as to how I can get my pulumi app to not touch this resource group? I've added the protect option to it and I've also added a lock in the Azure Portal. I do NOT want to accidentally delete all of our control planes and subsequently delete all of the clusters.
ancient-megabyte-79588
03/29/2021, 11:32 PM
sticky-match-71841
03/30/2021, 11:37 AM
handsome-state-59775
03/30/2021, 2:55 PM
error: resource default/azure-secret was not successfully created by the Kubernetes API server : secrets "azure-secret" already exists
but:
$ KUBECONFIG=./kube.yaml k get secret azure-secret -n default
Error from server (NotFound): secrets "azure-secret" not found
any leads for debugging this? kube.yaml is from: p stack output kubeconfig --show-secrets > kube.yaml
handsome-state-59775
03/30/2021, 4:08 PM
busy-soccer-65968
03/31/2021, 4:47 PM
apiVersion
on an ingress is different than what it actually is. Basically in reality my ingress is extensions/v1beta1
but when I refresh pulumi changes state to networking.k8s.io/v1
and then when I preview it shows it wanting to change it to extensions/v1beta1
which it already is... I'm also seeing this Ingress has at least one rule that does not target any Service. Field '.spec.rules[].http.paths[].backend.serviceName' may not match any active Service
which is also around this helmchart and ingress. I found an issue from 2020 around this. Basically then it was simply a misconfigured helmchart. However, I have confirmed that my labels are correct and my label selectors all line up.
limited-rain-96205
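For the "does not target any Service" warning above, two things have to line up: the Ingress backend's serviceName must match an actual Service, and that Service's selector must be a subset of the Pods' labels. The subset rule can be sanity-checked in plain TypeScript (the helper name is mine, not a Pulumi API):

```typescript
// selectorMatches: true when every key/value pair in the selector is present
// in the labels -- the same subset semantics Kubernetes uses when a Service
// picks its endpoint Pods.
function selectorMatches(
    selector: Record<string, string>,
    labels: Record<string, string>,
): boolean {
    return Object.entries(selector).every(([k, v]) => labels[k] === v);
}

// e.g. a Service selecting app=web matches a Pod labeled app=web,tier=frontend,
// but a selector with an extra key the Pod lacks does not match.
```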
03/31/2021, 8:04 PM
--replace
requires that you specify each resource, but there are quite a lot, I just want it to clobber everything.
limited-rainbow-51650
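Since --replace wants one URN per flag, the flag list can be generated from `pulumi stack export` instead of typed by hand. A sketch (the function name is mine; the JSON shape is the stack deployment format that export emits):

```typescript
// Build a `--replace <urn>` pair for every resource in a stack export, so the
// full set can be passed to `pulumi up` in one invocation.
interface StackExport {
    deployment: { resources: { urn: string }[] };
}

function replaceFlags(stack: StackExport): string[] {
    return stack.deployment.resources.flatMap(r => ["--replace", r.urn]);
}
```

Usage idea: a tiny script that reads the export from stdin and prints `replaceFlags(...)` joined with spaces, spliced into the `pulumi up` command line.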
04/01/2021, 12:01 PM
Provider
class but that doesn't expose the info. Is there a standard way of getting this using the pulumi-kubernetes
provider?
quiet-motorcycle-76742
04/01/2021, 3:50 PM
pulumi preview
is convinced that it needs to delete all the resources deployed by that chart:
└─ kubernetes:helm.sh/v3:Chart  aws-load-balancer-controller
-     ├─ kubernetes:rbac.authorization.k8s.io/v1:Role  default/aws-load-balancer-controller-leader-election-role  delete
-     ├─ kubernetes:core/v1:Secret  default/aws-load-balancer-tls  delete
-     ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRoleBinding  aws-load-balancer-controller-rolebinding  delete
-     ├─ kubernetes:core/v1:ServiceAccount  default/aws-load-balancer-controller  delete
-     ├─ kubernetes:rbac.authorization.k8s.io/v1:RoleBinding  default/aws-load-balancer-controller-leader-election-rolebinding  delete
-     ├─ kubernetes:core/v1:Service  default/aws-load-balancer-webhook-service  delete
-     ├─ kubernetes:admissionregistration.k8s.io/v1:ValidatingWebhookConfiguration  aws-load-balancer-webhook  delete
-     ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRole  aws-load-balancer-controller-role  delete
-     ├─ kubernetes:admissionregistration.k8s.io/v1:MutatingWebhookConfiguration  aws-load-balancer-webhook  delete
-     ├─ kubernetes:apps/v1:Deployment  default/aws-load-balancer-controller  delete
-     └─ kubernetes:apiextensions.k8s.io/v1beta1:CustomResourceDefinition  targetgroupbindings.elbv2.k8s.aws  delete
If you actually go through with the pulumi up
though, it (correctly) leaves all those resources alone. Has anyone seen anything like this recently? I saw a few old issues about pulumi preview
being wrong, but none that were presently open.
glamorous-australia-21342
04/01/2021, 7:20 PM
up
on an existing cluster in EKS. I determined that we needed to associate an AWS IAM Role with a Kubernetes group in order for us to connect to each other's clusters. Now, however, after changing the CI from the original IAM user to a service account that assumes the role, we get the following error on up:
Configured Kubernetes cluster is unreachable: unable to load schema information from the API server: the server has asked for the client to provide credentials
We have our Pulumi code outputting the kubeconfig file and it's the same one I am currently connected with, so it can't be that the cert is expired or the kubeconfig is invalid. Any help is appreciated.
handsome-state-59775
04/04/2021, 9:16 AM
handsome-state-59775
04/05/2021, 5:19 AM
error: resource ****/serviceAccount-****-ge0e5qf8 was not successfully created by the Kubernetes API server : ServiceAccount in version "v1" cannot be handled as a ServiceAccount: v1.ServiceAccount.ImagePullSecrets: []v1.LocalObjectReference: readObjectStart: expect { or n, but found ", error found in #10 byte of ...|ecrets":["****/|..., bigger context ...|{"apiVersion":"v1","imagePullSecrets":["****/regcred"],"kind":"ServiceAccount","metad|...
any insights? code as follows:
handsome-state-59775
04/06/2021, 4:15 AM
kubectl -n $NAMESPACE patch serviceaccount default -p '{"imagePullSecrets": [{"name": "regcred"}]}'
?
(azure-native, python, pulumi_kubernetes)
better-shampoo-48884
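The ServiceAccount error a few messages up ("expect { or n, but found \"") is the API server rejecting imagePullSecrets given as bare strings: v1 wants a list of name references, exactly the shape the kubectl patch above sends. A tiny helper (name is mine) makes the required shape explicit:

```typescript
// v1 ServiceAccount.imagePullSecrets is []LocalObjectReference, i.e. a list
// of { name: ... } objects -- not a list of plain secret-name strings.
function imagePullSecrets(names: string[]): { name: string }[] {
    return names.map(name => ({ name }));
}

// imagePullSecrets(["regcred"]) yields [{ name: "regcred" }], which serializes
// to the same JSON as the kubectl patch in the message above.
```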
04/06/2021, 6:22 AM
const keyvaultCSI = new k8s.helm.v3.Chart("keyVaultCSI", {
    chart: "csi-secrets-store-provider-azure",
    version: "0.0.17",
    fetchOpts: {
        repo: "https://raw.githubusercontent.com/Azure/secrets-store-csi-driver-provider-azure/master/charts//"
    },
    values: {
        logFormatJSON: true,
    }
}, {
    provider: cluster
})
And getting this as an error:
pulumi:pulumi:Stack baseline-k8s-dev.k8s.infratesting create error: Unhandled exception: Error: invocation of kubernetes:helm:template returned an error: failed to generate YAML for specified Helm chart: failed to pull chart: no cached repo found. (try 'helm repo update'):
Basically following this instruction for installation: https://azure.github.io/secrets-store-csi-driver-provider-azure/getting-started/installation/
wet-noon-14291
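"no cached repo found" from kubernetes:helm:template is frequently the fetchOpts repo URL failing to resolve to an index.yaml. A sketch of the same chart with the trailing slashes trimmed; here `k8sProvider` is assumed to be a k8s.Provider built from the cluster's kubeconfig, rather than the cluster object itself:

```typescript
import * as k8s from "@pulumi/kubernetes";

declare const k8sProvider: k8s.Provider; // assumed: constructed from the cluster's kubeconfig

// Sketch: same chart, but with a repo URL that serves index.yaml directly
// and an explicit kubernetes Provider in the resource options.
const keyvaultCSI = new k8s.helm.v3.Chart("keyVaultCSI", {
    chart: "csi-secrets-store-provider-azure",
    version: "0.0.17",
    fetchOpts: {
        repo: "https://raw.githubusercontent.com/Azure/secrets-store-csi-driver-provider-azure/master/charts",
    },
    values: { logFormatJSON: true },
}, { provider: k8sProvider });
```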
04/06/2021, 8:47 PM
last-applied-configuration
compared to what is actually deployed? I have a case where an environment variable, with a reference to a secret is defined in last-applied-configuration
, but it is not under the actual spec. What's weird is that it is just this one variable; another variable, defined exactly the same way on the line above in pulumi, is there in both places.
better-shampoo-48884
04/07/2021, 7:31 AM
better-shampoo-48884
04/07/2021, 7:39 AM
bumpy-laptop-30846
04/08/2021, 12:32 PM
better-shampoo-48884
04/08/2021, 3:36 PM
better-shampoo-48884
04/08/2021, 3:39 PM
unhandled rejection: CONTEXT(1168): Invoking function: tok=kubernetes:helm:template asynchronously
STACK_TRACE:
Error
at Object.debuggablePromise (c:\<path>\node_modules\@pulumi\pulumi\runtime\debuggable.js:69:75)
at c:\<path>\node_modules\@pulumi\pulumi\runtime\invoke.js:126:45
at Generator.next (<anonymous>)
at fulfilled (c:\<path>\node_modules\@pulumi\pulumi\runtime\invoke.js:18:58)
at processTicksAndRejections (node:internal/process/task_queues:94:5)
Searching around has shown similar messages when helm is unable to output something - but there is no error message outside of this. Also - this is on the stack.preview() step (using automation)
edit: grmbl.. may have found the culprit - by going through each helm resource one by one and making sure it's the only one commented out, I finally got a proper error message. So strange that I otherwise only get a bunch of these and nothing else..
handsome-state-59775
04/09/2021, 8:54 AM
bumpy-laptop-30846
04/09/2021, 10:08 AM
export const hostname = ambassador.getResourceProperty("v1/Service", "ambassador", "status")
but status is not found. Whereas with
kubectl get svc ambassador -o yaml
I get an output with a status.
Is it normal that pulumi does not find the info just after the creation of the chart?
better-shampoo-48884
04/10/2021, 8:23 AM
├─ kubernetes:helm.sh/v3:Chart  akv2k8s
~  │  ├─ kubernetes:admissionregistration.k8s.io/v1:MutatingWebhookConfiguration  akv2k8s/akv2k8s-envinjector  update [diff: ~webhooks]
+- │  ├─ kubernetes:core/v1:Secret  akv2k8s/akv2k8s-envinjector-tls  replace [diff: ~data]
+- │  ├─ kubernetes:core/v1:Secret  akv2k8s/akv2k8s-envinjector-ca  replace [diff: ~data]
~  │  └─ kubernetes:apps/v1:Deployment  akv2k8s/akv2k8s-envinjector  update [diff: ~spec]
Edit: neeevermind! there is a tiny diff in the certificates generated.. a bit frustrating, but of no consequence. Was almost certain there might have been some encoding issues triggering the diff or something, but no - new certs are generated by the chart every time it's touched. oh well.
better-shampoo-48884
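If the chart really does mint new certs on every render, the noisy Secret replaces can be suppressed with a chart transformation that sets ignoreChanges on just those Secrets. A sketch (fetchOpts/values elided; whether ignoring data is safe depends on how the webhook consumes the certs):

```typescript
import * as k8s from "@pulumi/kubernetes";
import * as pulumi from "@pulumi/pulumi";

// Sketch: stop Pulumi from diffing the regenerated TLS material by ignoring
// the data field on the chart's Secret resources.
const akv2k8s = new k8s.helm.v3.Chart("akv2k8s", {
    chart: "akv2k8s",
    // ...fetchOpts and values as in the real program
    transformations: [
        (obj: any, opts: pulumi.CustomResourceOptions) => {
            if (obj.kind === "Secret") {
                opts.ignoreChanges = ["data"];
            }
        },
    ],
});
```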
04/10/2021, 8:33 AM
├─ kubernetes:helm.sh/v3:Chart  akv2k8s
++ │  ├─ kubernetes:core/v1:Secret  akv2k8s/akv2k8s-envinjector-ca  created replacement [diff: ~data];
++ │  ├─ kubernetes:core/v1:Secret  akv2k8s/akv2k8s-envinjector-tls  created replacement [diff: ~data];
~  │  ├─ kubernetes:apps/v1:Deployment  akv2k8s/akv2k8s-envinjector  updated [diff: ~spec]; Deployment initialization
~  │  └─ kubernetes:admissionregistration.k8s.io/v1:MutatingWebhookConfiguration  akv2k8s/akv2k8s-envinjector  updated [diff: ~webhooks]
How do I match that?
better-shampoo-48884
04/12/2021, 5:53 AM
cuddly-dusk-95227
04/14/2021, 11:36 AM
pulumi-eks. What am I missing?
handsome-state-59775
04/14/2021, 4:58 PM
handsome-state-59775
04/14/2021, 4:58 PM
billowy-army-68599
04/14/2021, 5:02 PM
handsome-state-59775
04/14/2021, 5:38 PM
kubectl
call to patch serviceaccount with image pull secrets), but passing the kubeconfig will require a refactor. just wanted to see if that could be avoided
billowy-army-68599
04/14/2021, 5:40 PM