I have a bug in my pulumi code where a k8s custom ...
# kubernetes
r
I have a bug in my Pulumi code where a k8s custom resource (an Argo Events Sensor) was created without explicitly specifying a provider. In that case it uses whatever the current k8s context happens to be as the provider, which led to corruption (resources accidentally overwritten) in our production k8s cluster when we meant to deploy to a test cluster. Now, when I try to fix it with an explicit k8s provider and run "pulumi up", it tries to replace the resource, but I get this error: error: resource xxxxx was not successfully created by the Kubernetes API server : xxxxxxx already exists. I guess this happens because the replacement tries to create the new resource first and only then delete the old one, which of course cannot work here. What is the best way to recover from this situation? And should Pulumi be smart enough to detect that the explicitly specified k8s provider is the same as the default one used before and simply ignore the change?
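For context, a minimal sketch of the resource once the provider is explicit; the context name, resource names, and sensor spec below are placeholders rather than the real code:

```typescript
import * as k8s from "@pulumi/kubernetes";

// Explicit provider pointing at the intended cluster (assumed kubeconfig context name).
const testCluster = new k8s.Provider("test-cluster", {
    context: "my-test-cluster",
});

// The Argo Events Sensor is created through that provider instead of whatever
// kubeconfig context happens to be current.
const sensor = new k8s.apiextensions.CustomResource("my-sensor", {
    apiVersion: "argoproj.io/v1alpha1",
    kind: "Sensor",
    metadata: { namespace: "argo-events" },
    spec: { /* sensor spec elided */ },
}, { provider: testCluster });
```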
b
if I'm understanding correctly, you'll have to remove the offending resource from the state using
pulumi state delete
we do have open issues about better handling of this, which I'll try to dig out
👍 1
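For reference, the command takes the resource's URN, which you can list with pulumi stack --show-urns; the URN below is made up for illustration:
pulumi state delete 'urn:pulumi:test::my-project::kubernetes:argoproj.io/v1alpha1:Sensor::my-sensor'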
r
I fixed it by manually deleting the k8s resources with kubectl delete, followed by "pulumi refresh" and "pulumi up". I will try "pulumi state delete" the next time something similar happens.
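For anyone hitting the same situation, the recovery sequence above looks roughly like this (the resource name and namespace are placeholders):
kubectl delete sensor my-sensor -n argo-events
pulumi refresh
pulumi up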
b
awesome, by deleting the resources in the cluster and doing
pulumi refresh
you achieved a very similar result! glad it got sorted
r
Thanks @billowy-army-68599. BTW, is there a way to specify a default k8s cluster in the Pulumi.xxx.yaml configuration file, much like we do with aws:profile?
b
yep! you can set these globally with
pulumi config set kubernetes:xxx
https://www.pulumi.com/docs/reference/pkg/kubernetes/provider/#inputs
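for example, to pin the default provider to a particular kubeconfig context (the context name here is a placeholder):
pulumi config set kubernetes:context my-test-cluster
that writes kubernetes:context: my-test-cluster into the config: block of Pulumi.xxx.yaml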
r
Great, thanks!
b
I also think you may have been able to do this with "delete before replace" (the deleteBeforeReplace resource option) for this particular resource, and then remove it once the replacement went through.
👍 1
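A rough sketch of what that would look like on the same (hypothetical) resource; the option can be dropped again after the replacement succeeds:

```typescript
import * as k8s from "@pulumi/kubernetes";

// Assumed kubeconfig context name for the target cluster.
const testCluster = new k8s.Provider("test-cluster", { context: "my-test-cluster" });

// deleteBeforeReplace makes the engine delete the old object before creating the
// replacement, avoiding the "already exists" error from the Kubernetes API server.
const sensor = new k8s.apiextensions.CustomResource("my-sensor", {
    apiVersion: "argoproj.io/v1alpha1",
    kind: "Sensor",
    spec: { /* sensor spec elided */ },
}, { provider: testCluster, deleteBeforeReplace: true });
```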