# aws
m
I’m using aws.s3.Bucket.get() and receiving a strange error that seems to indicate it’s somehow related to stack references (which I’m not using in this project):
registered twice (read and read)
I’ve tried getting my Bucket 3 different ways:
aws.s3.Bucket.get(bucketName, bucketName, { name: bucketName });
aws.s3.Bucket.get(bucketName, bucketName);
aws.s3.Bucket.get(bucketName);
Furthermore, it breaks the update and tells me to import an exported stack. Now I have some buckets with 2-3 extra -[a-z0-9]{7} suffixes
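(Editor's note: those suffixes are Pulumi's auto-naming at work. Stripping them off a physical name, as m does later in the thread, can be sketched as a small helper; the assumption here is that the suffix is always a dash followed by exactly seven lowercase alphanumeric characters.)

```typescript
// Strip a Pulumi auto-naming suffix like "-4c6a21b" from a physical
// resource name. Assumes the suffix is "-" plus exactly seven
// lowercase alphanumeric characters at the end of the name.
// Caveat: a real trailing word of exactly seven such characters
// (e.g. "my-buckets") would also be stripped.
function stripAutoNameSuffix(physicalName: string): string {
  return physicalName.replace(/-[a-z0-9]{7}$/, "");
}
```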
w
I’m not sure this helps, but have you tried using the aws.s3.getBucket() function instead of the get method? https://www.pulumi.com/docs/reference/pkg/aws/s3/getbucket/
l
getBucket is the AWS SDK function, returning a GetBucketResult. It doesn't give a Pulumi Bucket object.
m
aws.s3.Bucket.get(existingBucketId);
Implicitly creates a NEW bucket with another hash appended
l
It does? That's probably a bug.. the docs don't imply that, and no other resource works that way...
m
Yeah, feels like a bug to me
l
You should probably raise an issue on GitHub. Once you've done that, you could try using
new aws.s3.Bucket(..., { /* args */ }, { import: bucketName });
and manage it through Pulumi.
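(Editor's note: a minimal sketch of that adoption, with a hypothetical bucket name. The args must match the bucket's real configuration exactly, or `pulumi up` will reject the import. Untested infrastructure sketch.)

```typescript
import * as aws from "@pulumi/aws";

// Adopt an existing, unmanaged bucket into Pulumi state. The args
// must exactly mirror the bucket's current configuration, otherwise
// the import step fails with a diff.
const adopted = new aws.s3.Bucket("my-existing-bucket", {
  bucket: "my-existing-bucket", // physical name (hypothetical)
  acl: "private",
}, { import: "my-existing-bucket" });
```

After a successful `pulumi up`, the `import` option can be removed and the resource is managed like any other.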
m
The buckets are already managed through Pulumi, but I’m collecting a list of the buckets using the AWS SDK so I can generate cloudwatch alarms without exporting outputs from every “child” stack
l
Ah. I manage that the other way: export the appropriate CloudWatch IDs from the "parent" (in my lingo, "shared") project, and have each child project set up its own alarms. This has the advantage that when I destroy a stack, it cleans up CloudWatch itself.
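(Editor's note: the shape of that pattern, with illustrative project and output names. The parent project exports the alarm topic ARN; each child reads it via a StackReference and creates its own alarm next to its bucket, so destroying the child stack also removes the alarm. Untested infrastructure sketch.)

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

// In the shared ("parent") project, an output like:
//   export const alarmTopicArn = alarmTopic.arn;

// In each child project: read the parent's output...
const shared = new pulumi.StackReference("acme/shared/prod"); // hypothetical stack name
const alarmTopicArn = shared.getOutput("alarmTopicArn");

// ...and create the alarm alongside the bucket it monitors.
const bucket = new aws.s3.Bucket("reports");
new aws.cloudwatch.MetricAlarm("reports-size-alarm", {
  namespace: "AWS/S3",
  metricName: "BucketSizeBytes",
  statistic: "Average",
  period: 86400,
  evaluationPeriods: 1,
  comparisonOperator: "GreaterThanThreshold",
  threshold: 50_000_000_000, // illustrative threshold
  dimensions: { BucketName: bucket.bucket, StorageType: "StandardStorage" },
  alarmActions: [alarmTopicArn],
});
```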
m
Something like:
const resourceList = await clients.s3.listBuckets().promise();

const service: { resources: Record<string, aws.s3.Bucket> } = {
  resources: {},
};

resourceList.Buckets!.forEach(resource => {
  const resourceName = resource.Name!.replace(/-[a-z0-9]{7}$/, "");
  const resourceId = resource.Name!;
  service.resources[resourceName] = aws.s3.Bucket.get(resourceName, resourceId);
});
Not all the resources are managed by Pulumi, so I’m also collecting and reporting on those
l
Yea that's exactly what StackReferences help with. If you want to avoid the noise of those extra outputs and references, then maybe a lambda would work well too? That way, it can clean up after itself even if Pulumi isn't being run.
m
Upon further investigation, it appears that the buckets causing the initial problem are in a different region than the one in the Pulumi config.
l
A new provider will sort that.
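(Editor's note: something like the following, with an illustrative region and bucket name. An explicit aws.Provider pinned to the bucket's actual region is passed via the resource options, so the read doesn't go through the stack's default region. Untested infrastructure sketch.)

```typescript
import * as aws from "@pulumi/aws";

// Explicit provider pinned to the region the bucket actually lives in.
const usWest2 = new aws.Provider("us-west-2", { region: "us-west-2" });

// Bucket.get(name, id, state?, opts?) accepts a provider in opts.
const existing = aws.s3.Bucket.get(
  "legacy-bucket",          // logical name (hypothetical)
  "legacy-bucket-1a2b3c4",  // physical bucket name (hypothetical)
  {},
  { provider: usWest2 },
);
```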
m
I’m not sure it will because the AWS SDK s3.listBuckets does not tell me the region, and even with the region specified in the client, it returns the global list of buckets 😅
maybe there is a subsequent call I can make to get the bucket region, or… extract it from the ARN
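(Editor's note: getBucketLocation is that subsequent call, and it has its own quirks worth normalizing: buckets in us-east-1 return an empty LocationConstraint, and very old buckets in eu-west-1 can return the legacy value "EU". The helper name here is illustrative.)

```typescript
// Normalize the LocationConstraint field of an S3 GetBucketLocation
// response into a usable region name. Empty or missing means
// us-east-1; the legacy value "EU" maps to eu-west-1.
function normalizeBucketRegion(locationConstraint: string | undefined): string {
  if (!locationConstraint) return "us-east-1";
  if (locationConstraint === "EU") return "eu-west-1";
  return locationConstraint;
}
```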
l
Wow I didn't realize that getting a list of buckets was so hard.. guess I've never tried before.
m
Yeah, I’m collecting ~12 different resources to monitor. S3 has the most “non-standard” API. My coworker suggested it’s because S3 is the oldest service.
S3 is the only service giving me this much trouble.
Ultimately doing something like this as a workaround (a little ugly):
await Promise.all(resourceList.Buckets!.map(async resource => {
  const resourceName = resource.Name!.replace(/-[a-f0-9]{7}$/, "");
  const resourceId = resource.Name;
  const bucketLocation = await clients.s3.getBucketLocation({ Bucket: resourceId! }).promise();
  const bucketRegion = bucketLocation.LocationConstraint || "us-east-1";
  if (bucketRegion === aws.config.region) {
    service.resources[resourceName] = aws.s3.Bucket.get(resourceName, resourceId!);
  }
}));