# aws
p
Then, we would like to set up cross-account roles/permissions so that the Pulumi dev backend state is saved in an S3 bucket in the root account, meaning developers in the dev account can't accidentally alter the state. Any tips or resources are appreciated!
l
This won't work as a block to making updates; it will only stop people from updating the shared understanding of what has been deployed. There's nothing stopping someone deploying from code: the state isn't a lock, it's a cache. To stop people making updates, ensure that they don't have access to the resources that the code describes. That is: use AWS/Azure/GCP/etc. controls to protect your resources.
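For example, something like this (a minimal sketch with `@pulumi/aws`; the group name and the denied actions are placeholders, not a recommendation for your setup):
```typescript
import * as aws from "@pulumi/aws";

// Hypothetical guardrail: an explicit Deny that stops members of an assumed
// "developers" IAM group from touching account-level resources, no matter
// what any Pulumi state file says.
const denyCoreChanges = new aws.iam.Policy("deny-core-changes", {
    policy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
            Effect: "Deny",
            Action: ["iam:*", "organizations:*", "sso:*"], // placeholder action set
            Resource: "*",
        }],
    }),
});

new aws.iam.GroupPolicyAttachment("developers-deny-core", {
    group: "developers", // assumed pre-existing IAM group
    policyArn: denyCoreChanges.arn,
});
```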
p
Sorry, I may not have been 100% clear. The idea is that we have:
• an S3 Pulumi state bucket "pulumi_state_root" that only a select few have access to, which manages groups and fundamental resources
• an S3 Pulumi state bucket "pulumi_state" that more developers can read/write to, to cache the state of any IaC-created resources
Hence one Pulumi backend would be strongly protected, the other one less so. But via AWS prefix-based permissions, could we then manage read/write access to different stacks differently for different user groups?
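Roughly what I have in mind (a sketch only; `pulumi-state` stands in for our "pulumi_state" bucket, since real S3 bucket names can't contain underscores, and the `dev/` prefix assumes we log in with `pulumi login s3://pulumi-state/dev`):
```typescript
import * as aws from "@pulumi/aws";

// Sketch: developers may list and read/write state objects only under the
// "dev/" prefix; the rest of the bucket (e.g. root-level state) stays off limits.
const devStateAccess = new aws.iam.Policy("dev-state-access", {
    policy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [
            {
                Effect: "Allow",
                Action: "s3:ListBucket",
                Resource: "arn:aws:s3:::pulumi-state",
                Condition: { StringLike: { "s3:prefix": ["dev/*"] } },
            },
            {
                Effect: "Allow",
                Action: ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
                Resource: "arn:aws:s3:::pulumi-state/dev/*",
            },
        ],
    }),
});
```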
l
State is stored per project. You can put different projects' state in different files in one bucket, or in different buckets. However, no matter where you put the state or how protected you make it, it does not protect your in-cloud resources at all. Anyone with access to the resources can change those. And anyone with no access to the resources cannot use Pulumi to change the state describing those resources, since the cloud update will fail and therefore Pulumi won't even try to update the state. Obviously I don't know your use case, but at a glance, it seems like what you're trying to do cannot be done with the solution you're currently investigating.
p
OK, thanks for the feedback!
Our current setup looks as follows:
• two AWS accounts: root & other
• SSO login to the accounts
• `pulumi login` with `AWS_PROFILE`
• thus, one can only deploy to root if logged in to that account
I guess all you mean is that one could log in to e.g. the root account and then run `pulumi up` without `pulumi login`, such that the state is tracked locally, and so the S3 state file is not suitable for protecting against anyone with access to root making infrastructure changes?
m
As tenwit said, the state is just a cache. If you want to prevent people from making changes to your infrastructure, you have to protect the infrastructure, not the state files.
If you have infrastructure that should only be changed by a select group of users, make sure that only these users have the necessary permissions to do so.
That said, it's a good idea to make sure that state files can only be altered by those who can also change the infrastructure tracked in those files, so that the two don't get out of sync. But that's not protecting your infrastructure; it's just protecting the integrity of the state files and preventing people from running into errors during `pulumi up`.
p
> If you have infrastructure that should only be changed by a select group of users, make sure that only these users have the necessary permissions to do so.
Yes, our thought was that AWS Control Tower, permission group definitions, and S3 state files should be created in the root account, so that only users with access to the root account can alter them. Any other resources would live in another account and could thus be altered by those with access to that account. However, that means users need cross-account permission to the S3 state bucket (or at least parts of it) in the root account.
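Something like this is what we'd need, I think (sketch; the account ID, role name, and bucket name are placeholders):
```typescript
import * as aws from "@pulumi/aws";

// Sketch: a bucket policy on the state bucket in the root account that lets a
// deployment role from the dev account (placeholder ID 111111111111) read and
// write state objects under its own prefix.
const crossAccountState = new aws.s3.BucketPolicy("state-cross-account", {
    bucket: "pulumi-state", // assumed state bucket living in the root account
    policy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
            Effect: "Allow",
            Principal: { AWS: "arn:aws:iam::111111111111:role/pulumi-deployer" },
            Action: ["s3:GetObject", "s3:PutObject"],
            Resource: "arn:aws:s3:::pulumi-state/dev/*",
        }],
    }),
});
```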
m
Yes, if you want to store all state files in the same bucket, you need to give the users access to that bucket
p
> If you have infrastructure that should only be changed by a select group of users, make sure that only these users have the necessary permissions to do so.
How would you do that? I mean, one could go crazy and define for each group which kinds of AWS resources they are allowed to alter? Or the other extreme would be that any user can do anything in the respective account?
PS: Maybe I am asking the wrong question, feel free to correct!
m
Via AWS IAM. You create deployment roles and scope them as required. If I'm only allowed to change S3 buckets called `kilian-*`, restrict my S3 permissions to those buckets.
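As a sketch (everything not explicitly allowed here is implicitly denied):
```typescript
import * as aws from "@pulumi/aws";

// Sketch: a deployment policy that only permits S3 operations on buckets
// named "kilian-*" (both the buckets themselves and the objects in them).
const kilianOnly = new aws.iam.Policy("kilian-s3-only", {
    policy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
            Effect: "Allow",
            Action: "s3:*",
            Resource: [
                "arn:aws:s3:::kilian-*",   // the buckets
                "arn:aws:s3:::kilian-*/*", // the objects inside them
            ],
        }],
    }),
});
```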
> PS: Maybe I am asking the wrong question, feel free to correct!
I don't think you're asking the wrong questions but you seem to be solving a problem that you might not have in the first place
Not sure how much you can share but what's your use case/concern? It seems like you want to create arbitrary infrastructure (otherwise restricting the permissions would not be a big deal) but at the same time want to heavily restrict what can be changed?
Maybe you also don't need to centralize the Pulumi state in a single bucket? You only have to do that when you want to have stack references, i.e., when an "unprotected" stack references a "protected" stack
Otherwise, you can store the state of your "root" resources in the root account (even if the AWS resources are in different AWS accounts) and store the state of your account-level resources in the "unprotected" AWS accounts
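In case it helps, this is all a stack reference is (minimal sketch; the project/stack names and the `vpcId` output are made up, and for self-managed backends the org part is literally `organization`):
```typescript
import * as pulumi from "@pulumi/pulumi";

// Sketch: the "unprotected" app stack reads an output that the "protected"
// root stack exported. Both stacks must live in the same backend.
const rootInfra = new pulumi.StackReference("organization/root-infra/prod");
export const vpcId = rootInfra.getOutput("vpcId"); // assumed output of the root stack
```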
tl;dr We might have an XY Problem here
p
Yeah 😄
Our concern was that if the state bucket is in the same account in which users happily `pulumi up` and `pulumi destroy`, the state bucket may accidentally be deleted or altered, causing huge pain
so we wanted to protect it by moving it into the root account, so it can only be altered indirectly via Pulumi and cannot be deleted "accidentally" by most developers
so now we have a state bucket that is versioned and encrypted and could only be deleted by root account users
then, we may limit the access of users so they are not able to alter stacks that have "prod" in the name? To make sure that e.g. only CI pipelines can "deploy to production" after PR review?
but as you mentioned earlier, users could still do that if they don't `pulumi login` to the respective backend while still being logged in to the respective AWS account?
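Concretely, the kind of restriction we had in mind for the state side was something like this (a sketch; the bucket name is a placeholder, and per the above it only guards the state file, so prod resources would still need a CI-only deployment role):
```typescript
import * as aws from "@pulumi/aws";

// Sketch: deny developers writes to any state object whose key contains
// "prod". This blocks `pulumi up` runs that use this backend, but anyone
// logged in to the AWS account can still bypass it with a local backend.
const denyProdState = new aws.iam.Policy("deny-prod-state-writes", {
    policy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
            Effect: "Deny",
            Action: ["s3:PutObject", "s3:DeleteObject"],
            Resource: "arn:aws:s3:::pulumi-state/*prod*",
        }],
    }),
});
```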
l
> Our concern was that if the state bucket is in the same account in which users happily pulumi up and destroy, the state bucket may accidentally be deleted or altered, causing huge pain
This isn't a concern with appropriate architecture. Your IaC infrastructure effectively cannot be dependent on (the same) IaC. For example, if you want account abc123 to be completely controlled by IaC, then your IaC state should not be in account abc123.
This is the same concern as your logging bucket: you should not put your logging buckets in the accounts containing the resources being logged. Just as you should have an account reserved for logging/monitoring, which lasts long after all your other accounts are deleted (so that you can report to your auditors what happened in the past), you should have a state storage area for your IaC state files that will last long after any and all states are deleted.
For most of us, that state storage area is the Pulumi app: after we delete all our AWS accounts, our Pulumi account will remain, with our auditable info. If you're storing your state in S3, then that S3 bucket should be in an account that Pulumi does not manage: it just writes files there.
In the past, I have used a small Pulumi project to create the infrastructure required by all my other Pulumi projects. I realized that this was a real chicken-egg problem, and solving it "cleanly" (or more precisely, in a way I could explain to my CTO and auditors) involved completely breaking the cycle. This was one (of the many!) reasons we abandoned self-managed state and switched to the Pulumi service.
p
OK, thanks a lot for the additional explanation!
So, in your experience, it makes particular sense to switch to Pulumi Cloud when confronted with auditing etc., because otherwise it takes huge effort to make sure all requirements are fulfilled? And early on it makes sense to start with Pulumi Cloud because it requires less setup. So is it ever reasonable not to use Pulumi Cloud? 😄 We are a bit worried about increasing costs and "being locked in" / not having "full control".
l
No, the effort is small. It just shouldn't be solved with Pulumi. Create your bucket for keeping state, but do it manually, or via CloudFormation or something. Put it somewhere "global", maybe in your existing logging account. Put all your state files in the one bucket (to reduce overhead), just change the prefix for each one as appropriate.
Re: being locked in: lock-in to the Pulumi service isn't a thing. You can export your state at any time no matter what the backend is (`pulumi stack export`), copy it anywhere (e.g. to S3), and then all you need to do is log each project into that backend (`pulumi login`, then `pulumi stack import`), and you've moved.
> And early on it makes sense to start with Pulumi Cloud because it requires less setup.
This one, 100%. You can conflate state management issues with regular infrastructure management issues, and end up with spaghetti. When you're using (for example) AWS for two unrelated purposes (state management + your business) without realizing that they are completely unrelated purposes, then you're just making life harder for yourself.
Initially, do your best to ignore state management. Let Pulumi do it; it's what they do. You work on your business' infrastructure. Once you're completely au fait with that, then you can start looking at managing your own state: you can be confident knowing that any problems that pop up during those changes are not related to your business' infrastructure.
p
Fair points!
We will discuss again with the team
At this point it seems that we have already set up everything sufficiently to just continue with the self-managed setup.
However, if we encounter further challenges, switching to Pulumi Cloud is always a quick solution, as we can simply export and import the state 🙂