#general

gorgeous-minister-41131

11/16/2021, 7:06 PM
Is there a good way to handle state issues when pulumi is forcefully exited in the middle of a run? Having an operator manually export, examine, and re-import the state seems goofy, especially in a CI pipeline for Kubernetes resources. Is there a way to just tell pulumi, when --refresh is passed, to ignore an inconsistent state and take what it sees in the cluster/provider at face value?
One thought is we create some sort of administrative job that lets users enter a project and a stack, then does the export, removal of pending_operations, and re-import for you, and we all move on with our lives (rough sketch of the idea below). I know Pulumi is doing this for safety, but man, it’s a deal-breaker for any form of continuous automation, and it seems more like Pulumi should just do a better job of error correction or sanity checking if possible.
let’s heat this up 🔥 🙂
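Something like this, as a rough sketch of that admin job, not an official tool: it just scripts the export / edit / import dance with the pulumi CLI. It assumes pulumi is on PATH, you’re already logged in to the right backend, and the v3 checkpoint layout where pending_operations sits under "deployment"; the project-dir/stack arguments are hypothetical inputs.
```python
#!/usr/bin/env python3
"""Export a stack's state, drop pending_operations left behind by a
killed run, and import the cleaned state back."""
import argparse
import json
import subprocess
import tempfile


def clean_pending_operations(project_dir: str, stack: str) -> None:
    # Export the current state, equivalent to running
    # `pulumi stack export` inside the project directory.
    exported = subprocess.run(
        ["pulumi", "--cwd", project_dir, "stack", "export", "--stack", stack],
        check=True, capture_output=True, text=True,
    ).stdout
    state = json.loads(exported)

    # In the v3 checkpoint format, interrupted operations live under
    # deployment.pending_operations; dropping them is what the manual
    # export/edit/import workaround does.
    pending = state.get("deployment", {}).pop("pending_operations", None)
    if not pending:
        print(f"{stack}: no pending operations, nothing to do")
        return
    print(f"{stack}: removing {len(pending)} pending operation(s)")

    # Write the cleaned state to a temp file and import it back.
    with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
        json.dump(state, f)
        path = f.name
    subprocess.run(
        ["pulumi", "--cwd", project_dir, "stack", "import",
         "--stack", stack, "--file", path],
        check=True,
    )


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--project-dir", required=True)
    parser.add_argument("--stack", required=True)
    args = parser.parse_args()
    clean_pending_operations(args.project_dir, args.stack)
```
Run it as e.g. `python clean_pending.py --project-dir ./infra --stack dev` from a CI job (names illustrative), ideally gated behind some approval step since it throws away in-flight operation records.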

little-cartoon-10569

11/16/2021, 8:51 PM
pulumi cancel is your friend 🙂

gorgeous-minister-41131

11/17/2021, 12:34 AM
pulumi cancel doesn’t seem to apply in this situation though unless that cleans up pending_operations…

little-cartoon-10569

11/17/2021, 12:42 AM
No, it's what you use to less-dangerously force exit in the middle of a run.

gorgeous-minister-41131

11/17/2021, 7:34 PM
Right, that scenario works well if the automated pipeline traps the SIGINT properly, not when the GitLab runner you’re using kills the proc mid-run because someone pushed the cancel button 😛 That, and network inconsistencies are a real thing. I think Pulumi was trying to take the safe approach here, but with Terraform I never encountered this, since TF would just default to ignoring pending changes during the refresh step of plan, no matter what, due to the way it functioned. The Pulumi issue I shared above describes in depth a possible option to tell Pulumi to behave in a similar fashion, which I think would be a good opt-in feature.
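For the CI side, here’s a rough wrapper for a runner job, just a sketch rather than an official pattern; the stack name, grace period, and the exact `pulumi cancel --yes` invocation are assumptions to check against your CLI version. It forwards the runner’s SIGTERM to pulumi as SIGINT so it can try to exit cleanly, then falls back to pulumi cancel to release the stack lock, which, as noted above, still does not clean up pending_operations.
```python
#!/usr/bin/env python3
"""Run `pulumi up` under a signal trap suitable for CI runners that
send SIGTERM on job cancel (e.g. GitLab) and SIGKILL after a timeout."""
import signal
import subprocess
import sys

STACK = "dev"        # hypothetical stack name
GRACE_SECONDS = 60   # how long to let pulumi wind down before giving up

proc = subprocess.Popen(["pulumi", "up", "--yes", "--stack", STACK])


def handle_stop(signum, frame):
    # Forward an interrupt so pulumi gets a chance to stop gracefully.
    proc.send_signal(signal.SIGINT)
    try:
        proc.wait(timeout=GRACE_SECONDS)
    except subprocess.TimeoutExpired:
        # Out of time: release the stack so the next run isn't locked.
        # (This does NOT remove pending_operations from the state.)
        subprocess.run(["pulumi", "cancel", "--yes", STACK])
        proc.kill()
    sys.exit(1)


signal.signal(signal.SIGTERM, handle_stop)
signal.signal(signal.SIGINT, handle_stop)

sys.exit(proc.wait())
```
It only narrows the window; a hard SIGKILL from the runner still leaves you needing the cleanup job above.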