# general
p
bailing for the evening, but I'll keep tabs on this in case someone has ideas. state handling during failures definitely seems to need improvement; this is the second issue of this sort I've run into, and both effectively left me in a "delete it all by hand and start over" state.
w
I’ve opened https://github.com/pulumi/pulumi/issues/2801 to track this. We’ll look into it. Do you have details on the other issue you saw?
p
if I recall, it was around
```
error: the current deployment has 1 resource(s) with pending operations:
  * urn:pulumi:dev::aws-ts-hello-fargate::awsx:x:elasticloadbalancingv2:ApplicationLoadBalancer$aws:elasticloadbalancingv2/loadBalancer:LoadBalancer::371ba2bf, interrupted while creating

These resources are in an unknown state because the Pulumi CLI was interrupted while
waiting for changes to these resources to complete. You should confirm whether or not the
operations listed completed successfully by checking the state of the appropriate provider.
For example, if you are using AWS, you can confirm using the AWS Console.

Once you have confirmed the status of the interrupted operations, you can repair your stack
using 'pulumi stack export' to export your stack to a file. For each operation that succeeded,
remove that operation from the "pending_operations" section of the file. Once this is complete,
use 'pulumi stack import' to import the repaired stack.

refusing to proceed
```
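(Not from the thread, but the export/edit/import repair that error message describes can be sketched as a small script. The layout assumed here, a `deployment.pending_operations` list whose entries carry a `resource.urn`, is an assumption about the `pulumi stack export` format, so check your own exported file before trusting it.)

```python
import json

def remove_completed_operations(state_path, completed_urns):
    """Strip pending operations for resources you've confirmed completed.

    ASSUMPTION: the file at state_path came from `pulumi stack export` and
    stores pending operations under deployment["pending_operations"], each
    entry holding its resource URN at resource["urn"].
    Returns the number of operations removed.
    """
    with open(state_path) as f:
        state = json.load(f)

    pending = state.get("deployment", {}).get("pending_operations", [])
    # Keep only the operations whose outcome is still unknown.
    remaining = [
        op for op in pending
        if op.get("resource", {}).get("urn") not in completed_urns
    ]
    state["deployment"]["pending_operations"] = remaining

    with open(state_path, "w") as f:
        json.dump(state, f, indent=4)
    return len(pending) - len(remaining)
```

Usage would bracket this with the real CLI: `pulumi stack export --file state.json`, run the script for each URN you verified in the provider console, then `pulumi stack import --file state.json`.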
(dug in my scrollback)
b
if you look in the stack state file there's a block at the end ("pending" iirc); you can just rip those entries out to stop it complaining, without doing the full 'pulumi stack export' dance. i've had that too, where a deployment went over the 60min azure devops pipeline limit, got cancelled, and was left in a totally broken state
p
yeah, tried that. stayed broken.
may have changed how it was broken, though.
that said - pulumi should get smarter... 😉