# python
There are two separate issues here: one bug and one explanation of correct behaviour. The Kubernetes provider not being usable in this case sounds like a bug to me (cc @gorgeous-egg-16927 and @creamy-potato-29402, who can investigate that one). The reason you can't continue in some (not all) circumstances when an update/destroy is terminated non-gracefully is the per-stack state lock. The state lock exists so that two users can safely run `pulumi up` and know that their runs will be serialized. You don't actually need two users for this to come up, however; since there is no way for us to know when a second user is about to appear, the lock is held at all times. The "this doesn't prevent" refers to a per-resource lock, as opposed to the per-stack lock.
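To illustrate the serialization behaviour described above, here is a minimal sketch of a per-stack lock. This is purely illustrative (it is not Pulumi's actual implementation, and `update_stack` and the stack name `dev` are made up): two concurrent updates to the same stack take turns instead of interleaving.

```python
# Hypothetical sketch of a per-stack lock, NOT Pulumi's real implementation.
# The point: a second user's run on the same stack waits for the first to
# finish, so the two runs are serialized rather than interleaved.
import threading

_stack_locks: dict = {}          # one lock per stack name
_registry_lock = threading.Lock()

def _lock_for(stack: str) -> threading.Lock:
    # Lazily create exactly one lock per stack.
    with _registry_lock:
        return _stack_locks.setdefault(stack, threading.Lock())

def update_stack(stack: str, log: list) -> None:
    # Holding the per-stack lock for the whole update means a concurrent
    # run blocks here instead of mutating shared state mid-update.
    with _lock_for(stack):
        log.append(f"start {stack}")
        log.append(f"end {stack}")

log: list = []
threads = [threading.Thread(target=update_stack, args=("dev", log))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Each update's start/end pair is contiguous because the runs serialize.
print(log)
```

Note this lock is per *stack*, not per *resource*, which is why an ungracefully terminated run can leave the whole stack locked.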
I don’t understand exactly the k8s part of the scenario here. @little-river-49422
You’re saying that if the Kubernetes credentials are not available, you can’t do `pulumi up`?
They are available; they just point to a non-existent cluster. I think Luke understands the issue now.
ok, rope me in as appropriate.