# aws
w
Hi all, I have a Pulumi project using a custom S3 backend on a shared bastion AWS account, but I want each stack to deploy resources on a different “child” AWS account (i.e. dev, staging, and prod each have their own AWS account). How can I configure my project/stacks/env to work with this setup?
Project: infra              // backend on aws-bastion
 |- Stack: development      // deploys resources to aws-dev
 |- Stack: staging          // deploys resources to aws-staging
 |- Stack: production       // deploys resources to aws-prod
# Pulumi.yaml
name: infra
runtime: nodejs
backend:
  url: s3://my-pulumi-backend
# Pulumi.development.yaml
secretsprovider: awskms://alias/development/pulumi-secrets-key?region=eu-west-2
encryptedkey: …
config:
  aws:accessKey: <dev user access key ID>
  aws:allowedAccountIds:
    - <dev AWS account ID>
  aws:region: eu-west-2
# .env
AWS_SECRET_ACCESS_KEY=<development user secret access key>
The problem is that Pulumi only looks for one set of AWS credentials for everything. So if I want my stack to use my dev access key in the program to build resources in my dev AWS account, Pulumi can’t access the backend S3 bucket on the bastion AWS account (and vice versa). I think I could create a custom AWS provider in code based on the stack name, but then wouldn’t I have to manually specify that custom provider on every individual resource in the program? Is there a way to change the default provider on a per-stack basis while still allowing the project to use an S3 backend on a different account?
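For reference, a minimal sketch of the explicit-provider approach described above, assuming a Node.js program with @pulumi/pulumi and @pulumi/aws; the bucket and the config reads are illustrative, not taken from this thread:

// index.ts (sketch): build an explicit AWS provider from the stack's aws:* config
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

const awsCfg = new pulumi.Config("aws");

const accountProvider = new aws.Provider("account", {
    region: awsCfg.require("region"),
    accessKey: awsCfg.get("accessKey"),
    secretKey: awsCfg.getSecret("secretKey"),
});

// The drawback mentioned above: the provider has to be passed to every resource.
const bucket = new aws.s3.Bucket("example-bucket", {}, { provider: accountProvider });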
b
hey! so, this is a bit of a side effect of the way backends are implemented
the aws provider and the mechanism for accessing the S3 backend are actually implemented separately: one lives in the provider binary, the other in the Pulumi CLI. Your stack configuration looks correct, but Pulumi never uses the provider credentials to access the S3 bucket for your state storage
what you'll need to do is set the backend account's access key and secret access key in your .env file, and set the target account's credentials in your stack config. You can store the secret access key as a secret value and encrypt it using the --secret flag
right now, you have the access key in your stack config and the secret access key in your .env, which is totally fine, but you'll need to completely separate the backend credentials from the provider credentials
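Concretely, a sketch of that split, with every key value as a placeholder: the bastion credentials live only in the environment so the CLI can reach the backend, while the dev credentials live only in the stack config for the AWS provider.

# .env (bastion account, used by the Pulumi CLI for the S3 backend)
AWS_ACCESS_KEY_ID=<bastion user access key ID>
AWS_SECRET_ACCESS_KEY=<bastion user secret access key>

# stack config (dev account, used by the AWS provider)
pulumi config set aws:accessKey <dev user access key ID>
pulumi config set aws:secretKey <dev user secret access key> --secret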
w
Thanks for the really fast reply, that’s very helpful! I’m just trying this out now
b
i'll be in a workshop for the next couple hours, if you have any issues after that I'm happy to jump on a call to help out later today
w
I seem to have it working well now, thank you! In my case, Pulumi seems to be ignoring the .env file altogether, and I made this work by specifying the bastion creds (for the S3 backend) in ~/.aws/credentials. If I unset those and instead provide the creds in .env in the project directory, pulumi up fails with:
error: failed to load checkpoint: blob (key ".pulumi/stacks/development.json") (code=Unknown): NoCredentialProviders: no valid providers in chain. Deprecated.
        For verbose messaging see aws.Config.CredentialsChainVerboseErrors
I assumed Pulumi would automatically pull in the env vars from .env, but it seems I'm missing a step to get Pulumi to see them. In the end, I removed the .env file altogether and put the dev account aws:accessKey and aws:secretKey in the stack config, and Pulumi does the right thing now. I'm happy for Pulumi to use the bastion creds in the AWS credentials file to access the backend & secrets provider.
v
I found this answer while searching for a way to set the default provider when bringing up a stack and I've got some thoughts about this approach.
I would really like to see a way to configure the default provider at stack apply time because I think this approach, putting credentials in the stack configuration, isn't a very good solution.
The problem with doing it this way is that each stack is coupled to a key pair, which will eventually need to be rotated on every single stack, in every account they get deployed to. It is much more secure and flexible to use role-based access with temporary credentials issued by STS.
Thoughts?
b
hey @victorious-art-92103 you can absolutely use credentials via STS, you'll just need to set them as environment variables, instead of inside the stack config
in this particular case, the account that was deploying resources and the account that the state was stored in were different
you can even assume a role within the provider; it's completely flexible to your needs
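For example, a sketch of an assume-role setup in the stack config; the role name is made up and assumes a deployment role already exists in the target account:

# Pulumi.development.yaml (sketch)
config:
  aws:region: eu-west-2
  aws:assumeRole:
    roleArn: arn:aws:iam::<dev AWS account ID>:role/pulumi-deploy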
v
Right, we're doing that now via AWS_PROFILE to use STS but that's not really what I mean. 🙂 I'll try for a better explanation: rather than forcing the user to pass a provider explicitly to every resource that needs a different configuration, it would be helpful to set or reset the default provider based on criteria I dictate in the configuration so that it applies to all the resources being created thereafter. Something akin to a Python context manager where all the resources created inside the context use the configured provider would be super.
Writing SomeResource('name', args, { provider: ... }) over and over again is no fun 🙂
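One pattern that gets partway there today (a sketch with a made-up component name): wrap resources in a ComponentResource that carries the provider, and children that set parent inherit it instead of repeating { provider: ... } everywhere.

import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

// A thin wrapper whose only job is to carry an AWS provider for its children.
class AccountScope extends pulumi.ComponentResource {
    constructor(name: string, provider: aws.Provider, opts?: pulumi.ComponentResourceOptions) {
        super("example:index:AccountScope", name, {}, { ...opts, providers: { aws: provider } });
    }
}

const devProvider = new aws.Provider("dev", { region: "eu-west-2" });
const devScope = new AccountScope("dev", devProvider);

// Inherits devProvider via `parent`; no explicit `provider` option needed.
const bucket = new aws.s3.Bucket("logs", {}, { parent: devScope });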
b
do you use different providers inside the same stack? i'd love to try and track this, would you mind opening a github issue in pulumi/pulumi
v
we do, one use case is delegating route 53 zones across accounts
one account hosts the "root" zone and other stacks create NS records in the "root" zone to then manage records in a separate account
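A rough sketch of that pattern (zone names, the root zone ID, and the provider setup are all placeholders): the subdomain zone is created with the child account's provider, and the delegating NS record is written into the root zone with the root account's provider.

import * as aws from "@pulumi/aws";

const rootProvider = new aws.Provider("root", { region: "eu-west-2" });   // account hosting the root zone
const childProvider = new aws.Provider("child", { region: "eu-west-2" }); // account owning the subdomain

const devZone = new aws.route53.Zone("dev-zone", { name: "dev.example.com" }, { provider: childProvider });

// Delegate dev.example.com from the root zone by publishing the child zone's name servers.
const delegation = new aws.route53.Record("dev-delegation", {
    zoneId: "<root zone ID>",
    name: "dev.example.com",
    type: "NS",
    ttl: 300,
    records: devZone.nameServers,
}, { provider: rootProvider });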
I'll open an issue and link it here 🙂
b
thank you!