# getting-started
w
Hi all, I'm using S3 as my backend and I specify the S3 path in `Pulumi.yaml`. I want the backend S3 bucket to be different for each environment. How do I do it? Initially I wanted to use the same bucket name in each environment, but then remembered that bucket names are global.
name: pulumi_backend
stackConfigDir: ./config
runtime:
  name: python
description: S3 backend for self-hosted Pulumi
backend:
  url: s3://fd-pulumi-backend
basically this needs to be different for each stack (qa, prod, etc.):
backend:
  url: s3://fd-pulumi-backend
c
You should not set the backend URL in the project file (`Pulumi.yaml`). Instead, simply do a `pulumi login s3://<bucket>` and then run `pulumi stack init <stack name>`. This way, each stack is initialized in a different bucket.
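Sketched concretely, the per-environment flow would look something like this (the `-qa`/`-prod` bucket names are hypothetical, not from the thread):

```shell
# One state bucket per environment (bucket names are hypothetical).
pulumi login s3://fd-pulumi-backend-qa
pulumi stack init qa

# Switch to the prod backend before creating the prod stack:
pulumi logout
pulumi login s3://fd-pulumi-backend-prod
pulumi stack init prod
```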
l
Did you know that you can put a path in the backend? That would allow you to use one bucket for all your state info, but keep each state separate within that bucket.
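For instance (bucket and project names are hypothetical), a path after the bucket name keeps each project's state under its own prefix:

```shell
# Same bucket, a different prefix per project (names are hypothetical).
pulumi login 's3://my-state-bucket/project1'
# ...later, for another project:
pulumi logout
pulumi login 's3://my-state-bucket/project2'
```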
c
True. Unless you want to restrict access to stacks by the accounts in which the buckets are.
l
The backend account / creds should be separate from the stack account / creds. There's no need to have any relationship between state infra and stack infra.
c
I agree with you. But if you want to restrict access to your prod stack's state (simulate Pulumi Cloud's RBAC), you may want to have that in a separate bucket. I am sure there are other ways to manage access to objects in the same bucket though. For instance, you could also have an IAM policy restricting certain path prefixes I think.
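As a rough sketch of that idea (bucket name, prefix, and action list are assumptions, not a tested policy), an IAM policy could scope a principal to a single prefix:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ProdStateObjects",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-state-bucket/prod/*"
    },
    {
      "Sid": "ListProdPrefixOnly",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-state-bucket",
      "Condition": { "StringLike": { "s3:prefix": ["prod/*"] } }
    }
  ]
}
```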
w
Thanks! How do we define the backend in the stack file? I want to avoid running `pulumi login` every time I need access to a specific backend...
l
Just do it once, see what gets put into the stack file, and copy/edit accordingly? I'm not sure it even goes in the stack file, it might go in the profile in ~/.pulumi. Praneet probably can confirm.
c
Yeah I don't believe the backend URL goes in the stack config file at all.
w
when you do login, nothing goes to the stack file
so if it doesn't go to the stack file, how can we have a separate S3 backend for each stack?
i think in this case, the only option is to run `login`
c
If the experience of `pulumi login` is not up to your liking for switching between stacks (which can be annoying), you should do what @little-cartoon-10569 is suggesting.
Remember that `pulumi login` is global.
So as long as you are logged into a specific backend, any stack you initialize would be stored in that backend.
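One way to take the sting out of switching backends per stack (a sketch; the script name, stack names, and buckets are hypothetical) is a small wrapper script:

```shell
#!/bin/sh
# Hypothetical helper: log into the matching backend, then select the stack.
# Usage: ./switch-stack.sh qa|prod
set -eu
stack="$1"
case "$stack" in
  qa)   bucket='s3://fd-pulumi-backend-qa' ;;
  prod) bucket='s3://fd-pulumi-backend-prod' ;;
  *)    echo "unknown stack: $stack" >&2; exit 1 ;;
esac
pulumi logout
pulumi login "$bucket"
pulumi stack select "$stack"
```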
w
not sure I'm following @little-cartoon-10569’s method... any info in the docs?
that's why I'm not a fan of using `pulumi login`... a recipe for disaster
l
See: https://www.pulumi.com/docs/concepts/state/
"The bucket-name value can include multiple folders, such as my-bucket/app/project1. This is useful when storing multiple projects' state in the same bucket."
w
Ahh, I can't use the same bucket for all projects because each project is for a separate AWS account
l
I recommend treating state infra as if the only user who can ever access it is your CD pipeline user. This makes life much easier. It does move access control to business processes, which can concern some people, but I have never found a case in which the extra effort to duplicate app.pulumi.com's features have been worth the time invested.
If you do not trust your infra devs to deploy your infra, or if business process requires the pipeline to do the deployment, then just don't allow the devs to access the state. And allow the pipeline to access all states.
The benefits are increased simplicity, and increased centralization of things like state change auditing.
If you do trust your devs, then allow them the same access as the pipeline.
The only case this doesn't cover is when you allow devs to deploy to some stacks but not others. And even that can be supported by requiring the devs to `pulumi login` to the appropriate backend.
> Ahh, I can't use the same bucket for all projects because each project is for a separate AWS account
This isn't correct. The state does not need to be in the same account, and indeed, I strongly recommend that it is not in the same account.
You should consider having an account for infra management. Probably the account you use for your audit log buckets would be a good option, else use a whole new account.
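One way to wire that up (the account ID, role, and profile names below are hypothetical) is a dedicated AWS profile for the state account, assumed via a role:

```ini
# ~/.aws/config -- hypothetical profile for the state/infra-management account
[profile pulumi-state]
role_arn = arn:aws:iam::111111111111:role/PulumiStateAccess
source_profile = default
region = us-east-1
```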
w
ahhh I think I get what you mean....
let me try to follow this
l
You can put the special backend creds in the backend URL (or pass to pulumi login):
As of Pulumi CLI v3.33.1, instead of specifying the AWS Profile, add awssdk=v2 along with the region and profile to the query string. The URL should be quoted to escape the shell operator &, and used as follows:
pulumi login 's3://<bucket-name>?region=us-east-1&awssdk=v2&profile=<profile-name>'
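The single quotes matter: without them, the shell would treat `&` as its background operator and split the command. A quick demonstration with `echo` (bucket and profile names are hypothetical):

```shell
# Single quotes pass '?' and '&' through to pulumi verbatim.
url='s3://my-state-bucket?region=us-east-1&awssdk=v2&profile=pulumi-state'
echo "$url"
```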
c
(side note: I actually wish the backend URL was stored in the stack config. Both for self-managed as well as for Pulumi Cloud-managed stacks. For the latter it would be great if the stack config file showed the username or org under which the stack exists.)