# aws
a
For folks who are using S3 state storage and deploying to multiple accounts, how are you dealing with permissions boundaries for the entities executing the stacks?
b
it's a tough question to solve. You could use our SaaS 😉 but generally I recommend having a dedicated "infrastructure" account that stores your state and hosts the bucket.
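A minimal sketch of that pattern in Pulumi TypeScript, assuming a stack that runs against the shared "infrastructure" account (the bucket name is made up):

```typescript
import * as aws from "@pulumi/aws";

// Runs in the dedicated "infrastructure" account: create the bucket that
// holds Pulumi state for the other accounts. The bucket name is hypothetical.
const stateBucket = new aws.s3.Bucket("pulumi-state", {
    bucket: "my-org-pulumi-state",
    versioning: { enabled: true }, // lets you recover from bad state writes
    serverSideEncryptionConfiguration: {
        rule: { applyServerSideEncryptionByDefault: { sseAlgorithm: "aws:kms" } },
    },
});

// Other projects then point at it with:
//   pulumi login s3://my-org-pulumi-state
export const stateBucketName = stateBucket.bucket;
```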
a
Makes sense (both answers 🙂 )
b
is our SaaS an option?
l
Explicit AWS providers in code make this a lot easier to manage. Your default AWS creds (env vars) are used only for state storage; all Pulumi resources are created via explicit providers. Reduces the risk of creating stuff in the wrong account.
🙌 1
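For reference, a rough sketch of the explicit-provider pattern in Pulumi TypeScript (profile names and regions are invented):

```typescript
import * as aws from "@pulumi/aws";

// Env-var credentials are used only by the S3 state backend. Every resource
// below names an explicit provider, so nothing lands in the wrong account.
// Profiles here are hypothetical; you could also hand out cross-account
// access via an assume-role configuration on each provider.
const prod = new aws.Provider("prod", { region: "us-east-1", profile: "prod" });
const staging = new aws.Provider("staging", { region: "us-east-1", profile: "staging" });

// Resources opt in to an account explicitly via resource options.
const prodBucket = new aws.s3.Bucket("prod-assets", {}, { provider: prod });
const stagingBucket = new aws.s3.Bucket("staging-assets", {}, { provider: staging });
```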
c
I never mix different AWS accounts in a single stack. When I want to share a bucket, I define it in a stack that's applied to the AWS account that holds the bucket, and I set a bucket policy that whitelists my org, or part of it, depending on my needs. Same as @billowy-army-68599: I have some kind of "infrastructure / shared services" accounts.
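A hedged sketch of that bucket-policy approach in Pulumi TypeScript, assuming you scope access with your AWS Organizations ID (all values are placeholders):

```typescript
import * as aws from "@pulumi/aws";

// Defined in the stack that runs against the account owning the bucket.
const shared = new aws.s3.Bucket("shared-artifacts");

new aws.s3.BucketPolicy("shared-artifacts-policy", {
    bucket: shared.id,
    policy: shared.arn.apply(arn => JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
            Sid: "AllowOrgRead",
            Effect: "Allow",
            Principal: "*",
            Action: ["s3:GetObject", "s3:ListBucket"],
            Resource: [arn, `${arn}/*`],
            // Only principals inside the organization get access.
            // The org ID below is a placeholder.
            Condition: { StringEquals: { "aws:PrincipalOrgID": "o-xxxxxxxxxx" } },
        }],
    })),
});
```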
l
Unfortunately that isn't always possible. For example, setting up VPC peering requires access to two accounts. You could do it via two stacks, but then you'd have to write some state-detection code; it's much easier to use two providers to solve this sort of issue.
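For the peering case specifically, a sketch with two providers in one Pulumi TypeScript stack (profiles, CIDRs, and the account ID are invented):

```typescript
import * as aws from "@pulumi/aws";

// Two explicit providers, one per account. Profiles are hypothetical.
const requester = new aws.Provider("requester", { region: "us-east-1", profile: "account-a" });
const accepter = new aws.Provider("accepter", { region: "us-east-1", profile: "account-b" });

const vpcA = new aws.ec2.Vpc("vpc-a", { cidrBlock: "10.0.0.0/16" }, { provider: requester });
const vpcB = new aws.ec2.Vpc("vpc-b", { cidrBlock: "10.1.0.0/16" }, { provider: accepter });

// Request the peering from account A...
const peering = new aws.ec2.VpcPeeringConnection("a-to-b", {
    vpcId: vpcA.id,
    peerVpcId: vpcB.id,
    peerOwnerId: "222222222222", // placeholder account ID for account B
}, { provider: requester });

// ...and accept it from account B, all within the same stack.
new aws.ec2.VpcPeeringConnectionAccepter("b-accepts-a", {
    vpcPeeringConnectionId: peering.id,
    autoAccept: true,
}, { provider: accepter });
```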