Did something change with how pulumi handles s3 bu...
# aws
b
Did something change with how pulumi handles s3 buckets? I created a bucket in stack A and then tried to create the same bucket in stack B. I fully expected stack B to fail and to have to import the bucket, but instead it “created” the bucket and took it over from stack A. I have never seen this happen before and am concerned that I can hijack resources from other stacks without importing them. Has anyone encountered this behavior before? In the past I’ve always had to import a resource if it already exists.
b
This isn’t how Pulumi works and is highly irregular. Can you provide more information about what happened here? Ideally, full step-by-step details of what you did
b
I agree, I’m extremely confused by it. So I originally had a project deploying an S3 bucket and some other “baseline” resources to an account. I then abstracted the resources out to a separate project so I could deploy them to multiple accounts. When I ran the new project, I expected the resources for the account that already had them deployed to fail, but it succeeded and has the bucket (and other resources) in its state
I even tried to create a second stack off of the original project to see if I could force it to fail and got the same results
b
How are you defining the bucket?
What are the buckets that you defined called?
(i.e. how do they appear in the API/console)
b
I’m using the basic Pulumi AWS Classic provider and creating them blank so far. They are all called the same name, which is why I expected it to fail. In Pulumi Cloud, the bucket is clearly in every state, but they all link to the exact same bucket in AWS.
I tried to update my aws provider to the latest version to check that off, and see the same results
woah even if I use the awscli to try and create the same bucket it “works”
and I don’t get the usual 400 error
I am reaching out to AWS support cause it seems to be on the AWS side.
b
Can you please show the code you’re using? I suspect you’re running into autonaming. The resource name in Pulumi is not the name the resource is given in the API; we append some uniqueness to the resource name to prevent collisions
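For illustration, a minimal TypeScript sketch (hypothetical resource and bucket names) of the autonaming behavior being described here, versus an explicitly named bucket, which is what would normally surface a collision between stacks:

```typescript
import * as aws from "@pulumi/aws";

// Auto-named: Pulumi appends a random suffix to the logical name "logs",
// so two stacks using the same logical name still get distinct buckets
// in AWS (e.g. "logs-4f1a2b3").
const autoNamed = new aws.s3.Bucket("logs");

// Explicitly named: every stack that declares this asks for exactly this
// bucket name in the API, so a second stack would normally fail instead
// of silently "creating" the same bucket.
const explicit = new aws.s3.Bucket("logs-fixed", {
    bucket: "my-org-baseline-logs", // hypothetical fixed name shared across stacks
});
```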
b
haha I promise I’m not 😆
When I try to create the same bucket using the awscli I get the same behavior
aws is acting like it’s creating it even though it already exists
b
🤯
b
it’s blowing my mind
b
Wow, let me know how that turns out. I’ve never seen anything like it before
b
@billowy-army-68599 so I learned something new today. Apparently in us-east-1, S3 will still return a 200 if you already own the bucket, for “legacy compatibility”, so idk how that should be handled in pulumi 😂
l
Does this not happen outside of us-east-1?
b
it does not
see
BucketAlreadyOwnedByYou
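For anyone reading along, a rough sketch of the behavior being described, using the AWS SDK for JavaScript v3 (the bucket name is hypothetical, and the bucket is assumed to already be owned by this account in the given region): us-east-1 returns 200 OK on a repeat CreateBucket, while other regions surface the conflict as an error.

```typescript
import { S3Client, CreateBucketCommand } from "@aws-sdk/client-s3";

// Re-create a bucket this account already owns, in its own region.
async function recreateOwnBucket(region: string, bucketName: string) {
    const s3 = new S3Client({ region });
    try {
        await s3.send(new CreateBucketCommand({
            Bucket: bucketName,
            // CreateBucketConfiguration must be omitted for us-east-1.
            ...(region === "us-east-1"
                ? {}
                : { CreateBucketConfiguration: { LocationConstraint: region as any } }),
        }));
        // us-east-1: 200 OK even though the bucket already exists ("legacy compatibility").
        console.log(`${region}: CreateBucket returned 200 OK`);
    } catch (err: any) {
        // Every other region: the conflict is reported explicitly.
        console.log(`${region}: ${err.name}`); // "BucketAlreadyOwnedByYou"
    }
}

recreateOwnBucket("us-east-1", "my-org-baseline-logs"); // succeeds silently
recreateOwnBucket("eu-west-1", "my-org-baseline-logs"); // throws BucketAlreadyOwnedByYou
```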
l
Suddenly I feel like a teenage girl. In as much as I'M TELLING EVERYONE I EVER MET, right now.
b
I feel like I’m in the twilight zone. I can’t have been the first to run into this. I guess to fix it in pulumi/terraform you would just check to see if the bucket exists before you try to create it.
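A rough sketch of that “check before create” idea in a Pulumi TypeScript program, assuming the program is allowed to call the AWS SDK at deploy time (the bucket name is hypothetical; a real fix would more likely belong in the provider itself):

```typescript
import * as aws from "@pulumi/aws";
import { S3Client, HeadBucketCommand } from "@aws-sdk/client-s3";

// Hypothetical fixed bucket name shared across stacks.
const bucketName = "my-org-baseline-logs";

// HeadBucket returns 200 if the bucket exists and is reachable by us.
async function bucketExists(name: string): Promise<boolean> {
    try {
        await new S3Client({}).send(new HeadBucketCommand({ Bucket: name }));
        return true;
    } catch {
        return false;
    }
}

// If the bucket already exists, read it instead of "creating" it, so this
// stack never silently takes ownership of another stack's resource.
export const bucket = bucketExists(bucketName).then((exists) =>
    exists
        ? aws.s3.Bucket.get("baseline", bucketName)
        : new aws.s3.Bucket("baseline", { bucket: bucketName })
);
```

In practice, `pulumi import` (or simply not reusing a fixed bucket name across stacks) is the safer route; the sketch just illustrates the guard being suggested above.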
l
My recommendation is to recognize that everything in us-east-1 breaks 10x as often as in any other region, and accordingly never to use it.
We have to rely on it for IAM, billing and more, so reduce the load there and use other regions.
b
true. If I was around when the infrastructure was originally provisioned I would’ve recommended that 🙃
I don’t see any issues logged before in pulumi but it seems like it could be pretty rough if you aren’t aware of that. I wonder if there could be an input specifically for this case.
I wonder if that behavior wasn’t bridged to pulumi
it was merged July 2022
l
I like this:
I accidentally used the same bucket name in two Terraform workspaces, and Terraform didn't throw any errors. The two workspaces ended up overwriting each other's KMS key settings on the bucket, which caused lots of issues.
That's like saying "I encrypted the payroll file and destroyed the key. My staff didn't approve". Understatement much?
b
“10/10 would not recommend”
l
Ah damn, I can't see the changelog for v4.24.. is that to do with HashiCorp's fun relicensing experiment?
Never mind, it worked, something was just slow
b
I don’t think their providers are under the new license. I believe they are all open still
l
Confirmed that the correct Terraform code is in the upstream submodule.
It looks like pulumi-aws v5.13 or newer is required. Can you confirm you have that?
b
@blue-translator-21668 can you open an issue so we can track this?
b
@billowy-army-68599 for sure @little-cartoon-10569 yeah, I had 5.40 before and then upgraded to 6.X when troubleshooting