I have a bug in my Pulumi code where a k8s custom resource (an Argo Events Sensor) was created without explicitly specifying a provider. In that case Pulumi uses whatever the current k8s context is as the provider. This led to corruption (an accidental overwrite) in our production k8s cluster when we meant to deploy to a test cluster. Now, when I try to fix it by passing an explicit k8s provider and running "pulumi up", Pulumi tries to replace the resource, and I get this error:
```
error: resource xxxxx was not successfully created by the Kubernetes API server : xxxxxxx already exists
```
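For reference, this is roughly what the change looks like. It's a sketch in TypeScript; the resource name, namespace, and the stack config key are made up, and the Sensor spec is elided:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

const config = new pulumi.Config();

// Explicit provider built from stack config, so each stack targets the
// intended cluster. The "kubeconfig" config key name is an assumption.
const clusterProvider = new k8s.Provider("cluster", {
    kubeconfig: config.requireSecret("kubeconfig"),
});

// Before: no provider option, so Pulumi fell back to whatever the ambient
// kubeconfig context happened to point at.
// const sensor = new k8s.apiextensions.CustomResource("my-sensor", sensorArgs);

// After: the same resource with the provider passed explicitly. This is the
// change that makes "pulumi up" want to replace the resource.
const sensor = new k8s.apiextensions.CustomResource("my-sensor", {
    apiVersion: "argoproj.io/v1alpha1",
    kind: "Sensor",
    metadata: { name: "my-sensor", namespace: "argo-events" },
    spec: { /* ...sensor spec elided... */ },
}, { provider: clusterProvider });
```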
I guess this happens because the replacement tries to create the new resource first and delete the old one afterwards, which of course can't work when an object with the same name already exists. What is the best way to recover from this situation? Should Pulumi be smart enough to detect that the explicitly specified k8s provider is effectively the same as the default one used before, and simply skip the replacement?
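For what it's worth, one option I've been looking at is forcing delete-before-replace on the resource so the create step can't collide with the existing object by name, but I don't know whether that's the right recovery here. Continuing the sketch above (same made-up names):

```typescript
// Ask Pulumi to delete the old object before creating its replacement,
// so the create step can't hit "already exists".
const sensor = new k8s.apiextensions.CustomResource("my-sensor", {
    apiVersion: "argoproj.io/v1alpha1",
    kind: "Sensor",
    metadata: { name: "my-sensor", namespace: "argo-events" },
    spec: { /* ...sensor spec elided... */ },
}, {
    provider: clusterProvider,
    deleteBeforeReplace: true,
});
```

The downside I see is that deleteBeforeReplace means the live Sensor is briefly gone during the update, which is part of why I'm not sure it's the right answer.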