# aws
a
🤔 `BucketObjectV2` resource silently overwrites an existing S3 object at the target key. Is this expected?
# YAML for a test
...
resources:
  random-data:
    type: aws:s3:BucketObjectv2
    properties:
      bucket: somebucket
      key: /path/to/a/file/that/was/already.there
      content: "Death to whatever was there previously!"
...
l
Yes, it's an object store that works much like a file system in this respect: if you write to a key that already exists, the new object overwrites the old one.

You can get some protection from this by enabling object versioning on the bucket. If you do that, you should explore lifecycle rules too, since old versioned objects take up the same space as regular objects, but are much harder to find and might just sit there, costing you storage.

You can also turn on write-once-read-many for individual objects using Object Lock:
https://www.pulumi.com/registry/packages/aws/api-docs/s3/bucketobjectv2/#s3-object-lock
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock.html
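A sketch of the versioning-plus-lifecycle setup described above, in the same Pulumi YAML style as the original snippet. The bucket name and the 30-day expiry window are illustrative; resource type tokens are as documented for recent pulumi-aws versions.

```yaml
resources:
  # Keep old versions instead of losing them on overwrite
  versioning:
    type: aws:s3:BucketVersioningV2
    properties:
      bucket: somebucket
      versioningConfiguration:
        status: Enabled
  # ...but expire noncurrent versions so they don't pile up unseen
  lifecycle:
    type: aws:s3:BucketLifecycleConfigurationV2
    properties:
      bucket: somebucket
      rules:
        - id: expire-old-versions
          status: Enabled
          noncurrentVersionExpiration:
            noncurrentDays: 30   # illustrative retention window
```

Note that versioning only lets you recover the previous object after an overwrite; it does not prevent the overwrite itself.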
q
AWS actually supports conditional writes for S3 now, which enables "do-not-overwrite" scenarios. If you're interested in support for this, please upvote this issue in the upstream Terraform provider (pulumi-aws is based on that one): https://github.com/hashicorp/terraform-provider-aws/issues/38964
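To make the semantics concrete, here is a small local sketch of what a conditional write does, using a plain dict to stand in for a bucket. Against real S3 the equivalent is (as far as I can tell, with a recent boto3) passing `IfNoneMatch="*"` to `put_object`, which makes the service reject the write with HTTP 412 if the key already exists. The function and exception names here are made up for illustration.

```python
class KeyExistsError(Exception):
    """Raised when a conditional put finds the key already present."""


def put_if_absent(store: dict, key: str, content: str) -> None:
    """Mimics PutObject with If-None-Match: * -- fail instead of overwrite."""
    if key in store:
        # Real S3 would return 412 PreconditionFailed here
        raise KeyExistsError(f"412 PreconditionFailed: {key!r} already exists")
    store[key] = content


bucket: dict = {}
put_if_absent(bucket, "teams/alpha/metadata.json", "{}")  # first write succeeds
try:
    put_if_absent(bucket, "teams/alpha/metadata.json", "{'v': 2}")
except KeyExistsError as e:
    print(e)  # the original object is untouched
```

This is exactly the "many writers, first one wins" behaviour the linked issue asks the provider to expose.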
a
Coming from Azure, AWS's normal behaviour took me by surprise 🤓. It looks like object locks apply to versions, and won't block the creation of a newer version at the same key. @quick-house-41860 thanks, it looks like conditional writes are exactly what I need. I have pretty much the same use case as already listed on the Terraform provider issue: many teams dropping metadata files into the bucket, and I want an error if there is already something there.