# general
p
[Full disclaimer - I'm not an official rep for Pulumi] I had a similar issue recently and resolved it by limiting the concurrency of the deployment through GitHub Actions (so the images are built in parallel and deployments are left for last and done synchronously). https://github.com/pulumi/pulumi/issues/2073 If you're already doing things synchronously and are sure that nothing else could be running, I guess you could use `pulumi cancel` - but that feels like a rather ugly hack. I've also seen some cases where the pipeline was cancelled during the `pulumi up`, which leaves the stack in a "broken" state for a period of time. The only solution I've found so far is `pulumi cancel` for that too. The solutions above are definitely not ideal, so I'm also interested in any (other) potential solutions.
a
Thanks. We are already using GitHub concurrency control to limit the number of jobs that can deploy to an environment at any one time. I'll try out the cancel logic, just to see if it will make any difference. Though it does seem like the wrong axe to use here.
p
Are you storing the state on your side (in a bucket or something similar) or are you relying on the Pulumi service? I'm asking because if it's the former, there may be an issue with reading from or writing to the state file at the wrong time.
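i.e., roughly the difference between these two login modes (the bucket name is a placeholder):

```sh
# Self-managed backend: state is a file in your own bucket, so
# concurrent jobs can step on each other's reads and writes.
pulumi login s3://my-pulumi-state-bucket

# Default: the Pulumi service stores the state and handles locking.
pulumi login
```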
a
We currently use the Pulumi service for this...
e
I’m having the same issue, but consistently rather than occasionally. Every deployment fails with this error, and if I re-run the job it works. It’s one of my biggest pain points right now.