We just changed the deletion behavior in https://github.com/pulumi/pulumi-kubernetes/pull/1379 yesterday, so that should fix issue 2 for you. I’ll cut another release with that fix next week, or you can try it out with a dev build.
gorgeous-egg-16927
11/13/2020, 11:40 PM
For issue 1, that looks more like the wrong data is being passed in for the kubeconfig. It should fail with an auth error if an expired token is the problem.
sticky-translator-17495
11/14/2020, 8:07 AM
Awesome, thanks. Okay, regarding issue 1: that sounds weird. While getting this error I have k9s running in another tab, using the same credentials from ~/.kube/config, and it talks to the cluster fine.
Does Pulumi somehow extract the config and save it temporarily somewhere? Maybe it gets corrupted in that process?
sticky-translator-17495
11/16/2020, 10:34 AM
To me it seems like the kubeconfig that is getting "corrupted" is somehow stored in the stack state, because once it happens on one device, the same thing starts happening on other devices trying to provision resources for that stack as well.