# general
s
This message was deleted.
l
Are you using the default AWS provider, or creating one? If you're creating one, are you passing in the correct region?
s
I am using the default aws provider
l
Ok then we're into the weird-side-cases part of the investigation 🙂 Are you running pulumi only from your local CLI? In particular, are you running it from a test framework, from a script or other wrapper code, or from a cloud pipeline or similar?
s
I'm running it from a pipeline (where the deployment failed) and trying to destroy from my local CLI before rerunning the pipeline. I'm using the S3 backend
(specifically AWS CodePipeline with CodeBuild to run Pulumi)
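For reference, both the pipeline and my local CLI point at the same state bucket, roughly like this (the bucket name below is just a placeholder):
```shell
# Log the CLI into the shared S3 state backend (bucket name is a placeholder)
pulumi login s3://my-pulumi-state-bucket
```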
l
And is the region set from your stack config file (Pulumi.<stack>.yaml)? If it is, have you verified that the file in the pipeline looks the way you expect? There wouldn't be a line termination issue, for example?
s
Yep, looks correct in both places. It has aws:region: us-west-2 under config, and the line endings are LF everywhere
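The relevant bit of the file is just this (stack file name uses a placeholder for the company name):
```yaml
# Pulumi.companyname-platform-staging.yaml (company name is a placeholder)
config:
  aws:region: us-west-2
```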
What's weird is that I only see this when destroying from my local box--the deploy failed for unrelated reasons, and I'd already destroyed and re-created this stack once before
We're in really early stages of bringing up our CI/CD
l
Bit confused: earlier you said "I'm running it from a pipeline (where the deployment failed)", but you've just said "I only see this when destroying from my local box". Is the pipeline all good, wrt this specific problem?
b
Hey @shy-oxygen-8874 are you sure that the stack you are destroying and the config file are 1:1 in their name convention?
s
Yeah, the pipeline is all good, my local CLI is where I'm seeing this
b
I wonder if you need to run pulumi stack select with the correct stack name
s
@broad-dog-22463 yes, I'm sure. They're both the same file name.
And the currently selected stack is the same name
All three are {companyname}-platform-staging
b
Can you run “pulumi stack select” and make sure you can see the stack
s
Yep, it's there and selected
And when I run just pulumi stack, I see the resources that failed to delete
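So the full check on my end looks roughly like this (stack name is a placeholder for the real company name):
```shell
# Select the stack by name and confirm it exists in the backend
pulumi stack select companyname-platform-staging
# Show the currently selected stack, including its tracked resources
pulumi stack
```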
l
Looks to me like you've thought of everything. I'm stumped. Maybe clone to a fresh directory and try again? Or remove ~/.pulumi and see if that helps?
s
Just to close the loop here, I was eventually able to just roll forward and the problem took care of itself. Still, seems like a scary time.
l
Roll forward to a newer version of Pulumi?
s
Roll forward with a newer version of my infrastructure using pulumi up. The resources that couldn't delete were still tracked, so it all worked.
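In other words, roughly:
```shell
# Instead of destroying, re-run an update against the same stack;
# the resources that failed to delete are still in the state and get reconciled
pulumi up
```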
👍 1