# aws
b
hey! so, this is a bit of a side effect of the way backends are implemented
the aws provider and the mechanism for accessing the S3 buckets are actually implemented separately: one lives in the provider binary, the other in the Pulumi CLI. Your stack configuration looks correct, but Pulumi never uses the provider credentials to access the S3 bucket for your state storage
what you'll need to do is set the access key and secret access key in your `.env` file, and also set them in your stack config. You can specify secret values for the secret access key and encrypt them using the `--secret` flag
👌 1
right now, you have the access key and the secret access key in your stack config, which is totally fine, but you'll need to completely separate them
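A minimal sketch of that separation (key values are placeholders; the backend credentials could equally live in a `.env` file loaded by your shell): the backend credentials go into the environment, the provider credentials into the stack config, with the secret one encrypted via `--secret`.
```sh
# Credentials the Pulumi CLI uses to reach the S3 state backend (placeholders)
export AWS_ACCESS_KEY_ID="AKIABACKENDEXAMPLE"
export AWS_SECRET_ACCESS_KEY="backend-secret-example"

# Credentials the aws provider uses to create resources, kept in the stack config;
# --secret encrypts the value with the stack's secrets provider
pulumi config set aws:accessKey AKIAPROVIDEREXAMPLE
pulumi config set --secret aws:secretKey provider-secret-example
```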
w
Thanks for the really fast reply, that's very helpful! I'm just trying this out now
b
i'll be in a workshop for the next couple hours, if you have any issues after that I'm happy to jump on a call to help out later today
๐Ÿ™ 1
w
I seem to have it working well now, thank you! In my case, Pulumi seems to be ignoring the `.env` altogether, and I made this work by specifying the bastion creds (for the S3 backend) in `~/.aws/credentials`. If I unset those and instead provide the creds in `.env` in the project directory, `pulumi up` fails with:
error: failed to load checkpoint: blob (key ".pulumi/stacks/development.json") (code=Unknown): NoCredentialProviders: no valid providers in chain. Deprecated.
        For verbose messaging see aws.Config.CredentialsChainVerboseErrors
I assumed Pulumi would automatically pull in the env vars from `.env`, but it seems I'm missing a step to get Pulumi to see them. In the end, I removed the `.env` file altogether, and put the dev account `aws:accessKey` and `aws:secretKey` in the stack config, and Pulumi does the right thing now. I'm happy for Pulumi to use the bastion creds in the aws credentials file to access the backend & secrets provider.
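For concreteness, a sketch of that final layout (region and key values are placeholders; the `secure:` ciphertext is whatever `pulumi config set --secret aws:secretKey ...` writes):
```yaml
# Pulumi.development.yaml -- dev account credentials for the aws provider
config:
  aws:region: us-east-1              # placeholder
  aws:accessKey: AKIADEVEXAMPLE      # placeholder
  aws:secretKey:
    secure: v1:ciphertext...         # written by `pulumi config set --secret`

# The bastion credentials used for the S3 backend and the secrets provider stay
# in ~/.aws/credentials, outside the stack config.
```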
v
I found this answer while searching for a way to set the default provider when bringing up a stack and I've got some thoughts about this approach.
I would really like to see a way to configure the default provider at stack apply time because I think this approach, putting credentials in the stack configuration, isn't a very good solution.
The problem with doing it this way is that each stack is coupled to a key pair, which will eventually need to be rotated on every single stack, in each account they get deployed to. It is much more secure and flexible to use role-based access with temporary credentials issued by STS
Thoughts?
b
hey @victorious-art-92103 you can absolutely use credentials via STS, you'll just need to set them as environment variables, instead of inside the stack config
in this particular case, the account that was deploying resources and the account that the state was stored in were different
you can even assume a role within the provider; it's completely flexible to your needs
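A rough sketch of that in TypeScript, assuming the pulumi-aws `assumeRole` provider option; the role ARN, session name, and region are placeholders:
```typescript
import * as aws from "@pulumi/aws";

// Explicit provider that assumes a role via STS instead of using long-lived keys.
const assumed = new aws.Provider("assumed", {
    region: "us-east-1",
    assumeRole: {
        roleArn: "arn:aws:iam::123456789012:role/deploy",
        sessionName: "pulumi-deploy",
    },
});

// Resources that should use the assumed-role credentials take the provider explicitly.
const bucket = new aws.s3.Bucket("artifacts", {}, { provider: assumed });
```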
v
Right, we're doing that now via AWS_PROFILE to use STS but that's not really what I mean. 🙂 I'll try for a better explanation: rather than forcing the user to pass a provider explicitly to every resource that needs a different configuration, it would be helpful to set or reset the default provider based on criteria I dictate in the configuration so that it applies to all the resources being created thereafter. Something akin to a Python context manager where all the resources created inside the context use the configured provider would be super.
`SomeResource('name', args, { provider: ... })` over and over again is no fun 🙂
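One existing pattern worth noting, as a rough sketch (component name, type token, and profiles are hypothetical): children created inside a `ComponentResource` inherit the providers passed in the component's options, so `{ provider: ... }` doesn't have to be repeated on every resource.
```typescript
import * as aws from "@pulumi/aws";
import * as pulumi from "@pulumi/pulumi";

// Hypothetical component: children created with { parent: this } inherit the
// providers passed in the component's options.
class AppBuckets extends pulumi.ComponentResource {
    constructor(name: string, opts?: pulumi.ComponentResourceOptions) {
        super("example:storage:AppBuckets", name, {}, opts);
        new aws.s3.Bucket(`${name}-logs`, {}, { parent: this });
        new aws.s3.Bucket(`${name}-assets`, {}, { parent: this });
    }
}

const devAccount = new aws.Provider("dev-account", { profile: "dev" });

// Every aws resource inside the component uses the dev-account provider.
new AppBuckets("app", { providers: { aws: devAccount } });
```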
b
do you use different providers inside the same stack? i'd love to try and track this, would you mind opening a github issue in pulumi/pulumi
v
we do, one use case is delegating route 53 zones across accounts
one account hosts the "root" zone and other stacks create NS records in the "root" zone to then manage records in a separate account
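A rough sketch of that delegation with two explicit providers (profile names, zone names, and the root zone id are placeholders):
```typescript
import * as aws from "@pulumi/aws";

// One provider per account.
const rootAccount = new aws.Provider("root-account", { profile: "root" });
const childAccount = new aws.Provider("child-account", { profile: "child" });

// The delegated sub-zone lives in the child account...
const subZone = new aws.route53.Zone("sub-zone", {
    name: "dev.example.com",
}, { provider: childAccount });

// ...and the NS records that delegate to it are created in the root zone's account.
new aws.route53.Record("sub-zone-delegation", {
    zoneId: "Z0000000000ROOT",   // placeholder root zone id
    name: "dev.example.com",
    type: "NS",
    ttl: 300,
    records: subZone.nameServers,
}, { provider: rootAccount });
```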
I'll open an issue and link it here ๐Ÿ™‚
b
thank you!