# kubernetes
Hey guys, a pretty basic question, I guess, but I have an interesting issue: Pulumi sometimes does not recognize resources created by itself. In particular, my Kubernetes `Service` gets detected as a resource unrelated to Pulumi. The thing is: nobody touched it. At some point Pulumi stops recognizing the deployed Kubernetes Service, treats it as a resource created outside of Pulumi's scope, and then I get a "resource already exists" error. Every deployment to this cluster was done using the same stack on S3. So I am wondering if anybody has had similar issues, or if somebody knows what the cause could be.

What I did do: I ran previews locally (but with `login --cloud-url s3://`) and deployments via CI. I usually always do `pulumi up -r`, or at least a refresh, before doing anything, though. Can the Pulumi stack get corrupted, and if yes, by what?
It is of course not expected, but state files can get corrupted. This can happen if there are multiple `pulumi up` operations running against the same state at the same time (the S3 backend currently cannot protect against this). It can also happen if something interrupts the `pulumi up`, such as pressing Ctrl+C to cancel it, or a network interruption. It could also be caused by a bug in Pulumi, but I've not heard of recent cases of this.
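If the state does turn out to be corrupted, it can help to inspect and repair it directly. A minimal sketch of the relevant commands, assuming a stack named `dev` (the stack name is a placeholder, not from this thread):

```shell
# Export the stack's state to a file so you can inspect it for
# duplicate or stale entries (e.g. the Service that Pulumi no
# longer recognizes).
pulumi stack export --stack dev --file state.json

# If an interrupted update left a pending operation on the stack,
# clear it:
pulumi cancel --stack dev

# After manually fixing state.json, write it back:
pulumi stack import --stack dev --file state.json
```

Editing exported state by hand is a last resort; take a backup of `state.json` before re-importing.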
If you can share the output of `pulumi preview --diff` and `pulumi refresh`, we might be able to help troubleshoot why this is occurring.
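For reference, a sketch of how those diagnostics could be gathered against the same S3 backend the CI uses (the bucket name is a placeholder):

```shell
# Log in to the same S3-backed state as CI
pulumi login --cloud-url s3://my-pulumi-state-bucket

# Show, resource by resource, what Pulumi plans to change and why
pulumi preview --diff

# Reconcile the state file with the actual cluster contents
pulumi refresh
```

Running these from the same backend as CI matters here: a preview against a different state location would itself report every resource as new.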