sparse-intern-71089 — 05/02/2022, 6:02 PM

millions-furniture-75402 — 05/02/2022, 6:06 PM

bitter-horse-93353 — 05/02/2022, 7:33 PM
05/02/2022, 7:33 PMpulumi/buckets/pulumi.yaml
that defines the bucket & outputs the bucket ID using pulumi.export
and then in myscript.py
I would do
stack = auto.create_or_select_stack(stack_name="dev", work_dir="pulumi/buckets/")
up_res = stack.up(on_output=print)
up_res.outputs[xxx].value # bucket ID
and the assumption here is that if the buckets all exist already & are configured in the environment the same as they are in pulumi.yaml
then the call to stack.up
will essentially no-op & just return the relevant data?bitter-horse-93353
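To make the flow above concrete, here is a minimal runnable sketch of that Automation API call, assuming the Pulumi CLI is installed and pulumi/buckets/ is a real project; the output name "bucket_id" is hypothetical, since the message above elides it as `xxx`:

```python
def get_bucket_id(stack_name: str = "dev", work_dir: str = "pulumi/buckets/") -> str:
    """Run `pulumi up` through the Automation API and return the bucket ID output.

    The output name "bucket_id" is a stand-in for whatever name the
    buckets project actually passes to pulumi.export.
    """
    # Imported lazily so this sketch can be read and imported even
    # without the `pulumi` package installed.
    from pulumi import automation as auto

    stack = auto.create_or_select_stack(stack_name=stack_name, work_dir=work_dir)
    # If the deployed state already matches the program, the update plan
    # reports every resource as unchanged ("same"), so up() is effectively
    # a no-op -- but it still returns all of the stack's outputs.
    up_res = stack.up(on_output=print)
    return up_res.outputs["bucket_id"].value
```

One caveat worth hedging: `up` diffs against the stack's last recorded state, not the live cloud resources, so out-of-band drift is not detected unless you refresh first (recent SDK versions accept a `refresh=True` argument to `up` for this).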
bitter-horse-93353 — 05/02/2022, 7:33 PM

millions-furniture-75402 — 05/02/2022, 8:18 PM

bitter-horse-93353 — 05/02/2022, 8:20 PM
05/02/2022, 8:20 PM<s3://bucket>-{uuid}/
anywhere in my code, but instead always have pulumi.mybucket.id
which will always remain correct regardless of infra changesmillions-furniture-75402
millions-furniture-75402 — 05/02/2022, 8:21 PM

bitter-horse-93353 — 05/02/2022, 8:22 PM

millions-furniture-75402 — 05/02/2022, 8:23 PM
const myBucket = new aws.s3.Bucket(`${appName}-bucket`, ...);
const lambdaFunctionApi = new aws.lambda.Function(
    `${appName}-api`,
    {
        code: new pulumi.asset.FileArchive("./dist/app"),
        memorySize: 128,
        environment: {
            variables: {
                API_BASE_PATH: apiBasePath,
                S3_BUCKET_NAME: myBucket.name.apply(v => v),
            },
        },
        handler: "lambdaApiHandler.handler",
        layers: [lambdaLayer.arn],
        role: applicationRole.arn,
        runtime: aws.lambda.NodeJS12dXRuntime,
        timeout: 30,
        vpcConfig: {
            securityGroupIds: [appSecurityGroup.id, vpc.defaultSecurityGroupId],
            subnetIds: privateSubnetIds,
        },
    },
    { dependsOn: [applicationRole, lambdaLayer] },
);
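For the Python project discussed earlier in the thread, the program side of this same pattern (define the bucket, reference its logical ID everywhere, and export it for the Automation API script) might look like the sketch below. This is an illustrative sketch only, assuming `pulumi` and `pulumi_aws` are installed; none of the names are the thread author's actual code:

```python
# pulumi/buckets/__main__.py -- illustrative sketch, not the author's code.
import pulumi
import pulumi_aws as aws

# Pulumi auto-names the physical bucket (logical name plus a random
# suffix), which is exactly why code should reference bucket.id rather
# than hardcode an "s3://bucket-{uuid}/" string.
bucket = aws.s3.Bucket("my-bucket")

# This export is what shows up in up_res.outputs[...] on the Automation
# API side; the output name "bucket_id" is hypothetical.
pulumi.export("bucket_id", bucket.id)
```

Like the TypeScript snippet above, any dependent resource would take `bucket.id` (or `bucket.bucket` for the name) directly, so renames and replacements propagate automatically instead of breaking a hardcoded string. A Pulumi program like this only executes under the Pulumi engine (`pulumi up` or the Automation API), not as a standalone script.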