w
Hi folks, does anyone know how to resolve the state of a Pulumi stack after a timeout? I was running pulumi up through a CI/CD runner, and the runner timed out while a CloudFront distribution was in the middle of being deployed. On the next run, the CloudFront distro is deployed but is not in Pulumi state, so pulumi tries to create it again and fails with a "resource already exists" message. I know it is possible to just delete the resource on AWS and run pulumi again, but I was wondering if there is any other time-saving option; I'm leaning towards automation. Thanks
f
So you'd export the stack, update the resource, and import it back in. You may also be able to use the import functionality on the resource itself, demonstrated here: https://www.pulumi.com/blog/adopting-existing-cloud-resources-into-pulumi/
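For the import route, a minimal sketch of the CLI form; the logical name (`cdn`) and the distribution ID (`E2EXAMPLE12345`) are placeholders, substitute your own:

```shell
# Adopt the already-created CloudFront distribution into this stack's
# state instead of deleting it and letting pulumi recreate it.
# Usage: pulumi import <type> <name> <id>
pulumi import aws:cloudfront/distribution:Distribution cdn E2EXAMPLE12345
```

pulumi import also prints a generated code snippet for the adopted resource that you can merge into your program, so the next pulumi up sees it as already managed.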
b
What I do a lot of the time is simply create the same thing again in another stack (different URL), then export that stack, edit the JSON parameters (search and replace the URL and the distro ID), and then import that other, fully complete stack
or from stage->prod, that sort of thing
after a refresh it will usually iron it all out smoothly
w
Thanks guys, I will give it a go and let you know how I get on. Just for reference: when the deployment failed, on the next run I added pulumi cancel to my script to cancel the pending deployment, followed by pulumi stack export | pulumi stack import. This was me trying to automate the process and just run a script that cleans the state and reruns the deployment. But I now see this can not be 100% automated; some manual intervention is required to recover failed deployments
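For reference, the semi-automated recovery script described above might look roughly like this (a sketch, not a full solution, since resources created outside of state still need a manual pulumi import):

```shell
#!/usr/bin/env sh
set -eu  # stop at the first failure rather than deploying over broken state

# Clear the lock/pending operation left behind by the timed-out run.
pulumi cancel --yes

# Round-tripping the checkpoint through export/import drops the
# pending-operation markers from the state.
pulumi stack export | pulumi stack import

# Refresh reconciles state with what actually exists in AWS -- but it only
# touches resources already in state, so the finished distro that was
# created outside of state still has to be imported by hand.
pulumi refresh --yes

pulumi up --yes
```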