I'm every now and then getting this kind of issue ...
# kubernetes
s
I'm every now and then getting this kind of issue during a stack refresh, and it ends up deleting these resources from the stack state. I'm not sure why this is happening, but I'm wondering if it is due to our OIDC cluster authentication setup with short-lived access tokens and refresh tokens.
1. Could this error occur when my token for the cluster expires?
2. Should the refresh command really delete resources from state if it fails to connect to the cluster? CLI-wise it's fine, I can just select no, but some of our tools use the Automation API and then the refresh is performed anyway (roughly as in the sketch below the error).
```
configured Kubernetes cluster is unreachable: failed to parse kubeconfig data in `kubernetes:config:kubeconfig`- couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" }
```
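For reference, this is roughly how the refresh gets triggered in our tooling through the Automation API; a minimal sketch, assuming a local Pulumi program (the stack name and work directory are placeholders):

```typescript
import { LocalWorkspace } from "@pulumi/pulumi/automation";

async function refreshStack() {
    // Select an existing stack from a local Pulumi project.
    // "dev" and "./infra" are hypothetical values.
    const stack = await LocalWorkspace.selectStack({
        stackName: "dev",
        workDir: "./infra",
    });

    // Unlike the CLI, there is no interactive confirmation prompt here,
    // so the refresh (and whatever state changes it decides on) just runs.
    const result = await stack.refresh({ onOutput: console.log });
    console.log("refresh result:", result.summary.result);
}

refreshStack().catch((err) => {
    console.error(err);
    process.exit(1);
});
```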
g
We just changed the deletion behavior in https://github.com/pulumi/pulumi-kubernetes/pull/1379 yesterday, so that should fix issue 2 for you. I’ll cut another release with that fix next week, or you can try it out with a dev build.
For issue 1, that looks more like the wrong data is being passed in for the kubeconfig. It should fail with an auth error if an expired token is the problem.
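For comparison, this is the shape of input the provider expects in `kubernetes:config:kubeconfig`; a minimal sketch, assuming the raw kubeconfig contents are passed as a string (the path and resource names are hypothetical). Passing some other plain string instead of a kubeconfig document is the kind of thing that produces the json parse error above:

```typescript
import * as fs from "fs";
import * as k8s from "@pulumi/kubernetes";

// Read the full kubeconfig document and hand it to an explicit provider.
// A value that isn't a kubeconfig (e.g. just a context name or a bare
// token) can't be parsed into apiVersion/kind, as in the error above.
const kubeconfig = fs.readFileSync("/home/user/.kube/config", "utf8");

const provider = new k8s.Provider("cluster", { kubeconfig });

// Any resource created with this provider talks to that cluster.
const ns = new k8s.core.v1.Namespace("example", {}, { provider });

export const namespaceName = ns.metadata.name;
```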
s
Awesome, thanks. Okay, regarding 1, that sounds weird. While getting this error I have k9s running in another tab, using the same credentials from ~/.kube/config, and it talks to the cluster fine. Does Pulumi somehow extract the config and save it temporarily somewhere? Maybe it gets corrupted in that process?
To me it seems like the kubeconfig that is getting "corrupted" is somehow stored in the stack state? Because once it happens on one device, the same thing starts happening on other devices trying to provision resources for that stack as well.
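If it helps to verify that, here is a minimal sketch of inspecting the exported stack state through the Automation API to see whether the Kubernetes provider's recorded inputs include a kubeconfig (stack name, workDir, and the checkpoint field layout are assumptions based on the usual `pulumi stack export` format):

```typescript
import { LocalWorkspace } from "@pulumi/pulumi/automation";

async function inspectProviders() {
    const stack = await LocalWorkspace.selectStack({
        stackName: "dev",    // hypothetical stack name
        workDir: "./infra",  // hypothetical project directory
    });

    // exportStack() returns the same document as `pulumi stack export`.
    const exported = await stack.exportStack();
    const resources: any[] = exported.deployment?.resources ?? [];

    for (const res of resources) {
        if (res.type === "pulumi:providers:kubernetes") {
            // Avoid printing the kubeconfig itself; just report whether one is stored.
            const hasKubeconfig = "kubeconfig" in (res.inputs ?? {});
            console.log(res.urn, "stores a kubeconfig in its inputs:", hasKubeconfig);
        }
    }
}

inspectProviders().catch((err) => {
    console.error(err);
    process.exit(1);
});
```

The same document can be dumped from the CLI with `pulumi stack export` and searched for the provider resource.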