We’re having an issue with stack.refresh() and K8s...
# kubernetes
w
We’re having an issue with stack.refresh() and K8s credentials not updating. Our Pulumi-based deployments read the K8s credentials from disk and put them into the k8s.Provider before doing stack.refresh(). However, it seems to be using the old credentials in the K8s Provider’s backend state, rather than the current credentials which we just read into the provider. This causes it to fail whenever the K8s credentials expire or change. Is this expected? How can we force it to update the Provider’s creds?
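For context, a minimal sketch of the setup described above, assuming the Node.js Automation API with an inline program; the kubeconfig path, stack/project names, and the namespace resource are placeholders, not details from this thread:

```typescript
import * as fs from "fs";
import * as k8s from "@pulumi/kubernetes";
import { LocalWorkspace } from "@pulumi/pulumi/automation";

// Inline Pulumi program: read the current credentials from disk and pass
// them to an explicit k8s.Provider.
const program = async () => {
    const kubeconfig = fs.readFileSync("/etc/kubeconfig", "utf8"); // hypothetical path
    const provider = new k8s.Provider("k8s", { kubeconfig });
    new k8s.core.v1.Namespace("app-ns", {}, { provider });         // placeholder resource
};

async function main() {
    const stack = await LocalWorkspace.createOrSelectStack({
        stackName: "dev",           // placeholder
        projectName: "k8s-deploy",  // placeholder
        program,
    });
    // The refresh consults the provider's kubeconfig as recorded in the
    // backend state, which is where the stale credentials come from.
    await stack.refresh();
    await stack.up();
}

main();
```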
b
The way we do this is we set up our kubeconfig to dynamically fetch credentials, and embed this into the `k8s.Provider` itself. That way, Pulumi will automatically refresh credentials in the same way `kubectl` will.
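A minimal sketch of this approach, assuming an EKS cluster and the AWS CLI purely as an example; the cluster name, endpoint, and CA data are placeholders. The key piece is the `exec` block, so credentials are fetched by running a command at call time instead of being baked into the provider's state:

```typescript
import * as k8s from "@pulumi/kubernetes";

// Kubeconfig with an exec-based user: whenever the provider needs
// credentials it runs the command below, the same way kubectl would.
const kubeconfig = JSON.stringify({
    apiVersion: "v1",
    kind: "Config",
    "current-context": "my-cluster",
    clusters: [{
        name: "my-cluster",
        cluster: {
            server: "https://MY-CLUSTER-ENDPOINT",          // placeholder
            "certificate-authority-data": "BASE64-CA-DATA", // placeholder
        },
    }],
    contexts: [{
        name: "my-cluster",
        context: { cluster: "my-cluster", user: "my-user" },
    }],
    users: [{
        name: "my-user",
        user: {
            exec: {
                apiVersion: "client.authentication.k8s.io/v1beta1",
                command: "aws",
                args: ["eks", "get-token", "--cluster-name", "my-cluster"],
            },
        },
    }],
});

const provider = new k8s.Provider("k8s", { kubeconfig });
```

Because the exec command (not a static token) is what ends up in the kubeconfig, a later `stack.refresh()` re-runs it and picks up fresh credentials.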
w
@brave-ambulance-98491 nice approach, I’ll try it. Thank you!
b
np!
w
@brave-ambulance-98491 how did you set up your kubeconfig to dynamically fetch credentials? Do you have an example?
b
You need to use the `exec` block in the `user` section of your kubeconfig. There are some docs here: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-set-credentials-em-
If you're using a cloud-provided cluster, they typically have instructions for how to set this up, like here: https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html
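As a hypothetical example of the linked `set-credentials` docs, the `exec` block can also be added from the command line; the user name and the aws/eks arguments below are placeholders borrowed from the EKS-style setup above:

```
kubectl config set-credentials my-user \
  --exec-api-version=client.authentication.k8s.io/v1beta1 \
  --exec-command=aws \
  --exec-arg=eks --exec-arg=get-token \
  --exec-arg=--cluster-name --exec-arg=my-cluster
```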
w
Hello @brave-ambulance-98491. I'm using an on-prem cluster and I tried to fetch the credentials using kubeadm. Running it on the CLI I get the kubeconfig successfully on stdout, but when I run it via exec I get the following error: `getting credentials: decoding stdout: no kind "Config" is registered for version "v1" in scheme "pkg/runtime/scheme.go:100"`. Do you have any suggestions about fetching credentials for an on-prem cluster?
b
I'm not sure about that one, I've not used `kubeadm`. It should be possible to configure a kubeconfig file that executes a binary to fetch credentials, but I don't know offhand the format for it.
w
@brave-ambulance-98491 oh, ok. Thanks a lot.