# kubernetes
b
Hi! I had to change my kubeconfig, and now many of my stacks are completely lost because they're trying to reach my cluster with the old kubeconfig. Do you have any idea how I could fix my stacks?
m
First, have a look at https://github.com/pulumi/pulumi-kubernetes/issues/2745. A fix for this landed in an alpha release just last week and might be helpful for you. You will have to provide a current kubeconfig to your Kubernetes provider. What exactly do you mean when you say "I changed my kubeconfig"? Where and how did you change it?
b
I manually reset it in my cloud provider's console.
m
How did you feed it to the Kubernetes provider originally?
b
It was created automatically by the cloud provider when the cluster was created (the cluster itself was created with Pulumi).
m
It's a bit difficult to guide you with so little information. Are you using a managed Kubernetes service? If so, which cloud provider are you on? Are you using the default Pulumi Kubernetes provider or do you instantiate a custom provider within your code (see the docs)?
tl;dr is that you'll have to update the kubeconfig in your stack
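If you're on the default provider, the usual way to do that is through stack configuration: the Kubernetes provider reads `kubernetes:kubeconfig` from your stack config. A rough sketch (the `~/.kube/new-config` path is a placeholder for wherever you saved the kubeconfig you fetched from your cloud provider):

```shell
# Point the default Kubernetes provider at the freshly fetched kubeconfig
pulumi config set kubernetes:kubeconfig ~/.kube/new-config

# Refresh so existing resources reconnect using the new credentials,
# then run an update as usual
pulumi refresh
pulumi up
```

If you instantiate an explicit `k8s.Provider` in your program instead, update the `kubeconfig` argument you pass to it the same way and run `pulumi up`.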