
abundant-rain-94621

01/13/2023, 6:15 AM
Hello everyone, I am looking for some resources to set up a CI/CD pipeline for Pulumi. I am using the Go SDK and have something that resembles the following:
1. Two separate stacks under Project A: main and test.
2. Separate teams within the organization: an Automation (CI/CD) team with permission to write to main and test, and a Developer team with permission to create stacks (like feature/new_s3).
3. Three GitHub Actions:
   a. PR to test branch from development branch:
      i. job one: run pulumi preview on the test stack.
      ii. job two: if job one succeeds, run pulumi up on the test stack.
      iii. job three: after jobs one and two run (regardless of success), run pulumi destroy on the test stack.
   b. PR to main branch from test branch:
      i. job one: run pulumi preview on the main stack.
   c. Merge to main:
      i. job one: run pulumi up on the main stack.
I thought I had this kind of working, but the problem is that the stacks reference the same resources, so my destroy on the test stack will delete everything in main, which is not desirable. How should I structure this? Should I potentially be passing an arg to my code at the go build stage, on the test branch, that appends a tmp string to all my resource names, so the GitHub Action (pulumi/actions@v3) could be amended to something custom? Thanks!
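The build-time idea at the end can be sketched with Go's linker flag injection (`-ldflags "-X ..."`). This is a minimal sketch, not tied to the Pulumi SDK; `stackSuffix` and `resourceName` are hypothetical names chosen for illustration:

```go
package main

import "fmt"

// stackSuffix is a hypothetical variable injected at build time for
// test-branch builds, e.g.:
//   go build -ldflags "-X main.stackSuffix=-test"
// It stays empty for production (main-stack) builds.
var stackSuffix = ""

// resourceName appends the build-time suffix so resources created for the
// test stack never share a physical name with the main stack's resources.
func resourceName(base string) string {
	return base + stackSuffix
}

func main() {
	fmt.Println(resourceName("app-bucket"))
}
```

Note that, as discussed below, Pulumi's per-stack auto-naming usually makes this kind of manual suffixing unnecessary unless you hard-code physical names.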

miniature-musician-31262

01/13/2023, 6:54 AM
> the problem is that the stacks reference the same resources so my delete on test stack will delete all my main which is not desirable.
I’m wondering how this is so (specifically multiple stacks operating on the same resources); it sounds unusual. Are you able to share any code from your program?

abundant-rain-94621

01/13/2023, 9:14 AM
I can't share code at the moment, but are you able to confirm that deploying to two different stacks under the same project won't actually overwrite one another? If so, then I'll retrace my code and search for potential double-ups, i.e. they should have different URNs. Edit: it may also be the requirement that bucket IDs need to be unique.

miniature-musician-31262

01/13/2023, 5:12 PM
Absolutely, assuming you're using Pulumi in the usual way. A stack is a collection of deployed resources. In order to destroy an already-deployed resource in a different stack, you'd have to "trick" Pulumi into thinking the second stack owned the resource already deployed in the first.
Given a program, say, that deploys an S3 bucket, a successful deployment of the `main` stack would record that the bucket belonged to `main`. If you then switched to the `test` stack and ran `pulumi up`, Pulumi would recognize that the `test` stack didn't yet have a bucket, so it'd create a new one for that stack. By default, Pulumi "auto-names" all resources uniquely by stack, meaning the actual bucket deployed for `main` (e.g., `bucket-abc`) would be different from the one deployed for `test` (e.g., `bucket-123`).
You could, however, hard-code a bucket name into your program such that deployments of `main` and `test` would try to use the same bucket name. But even if you did that (which is generally not recommended), the initial deployment of `test`, following a successful deployment of `main`, would fail, because the bucket deployed by `main` would already exist at the cloud provider, so the second creation attempt would be rejected. And running `destroy` on a not-yet-deployed stack would have no effect (even if the bucket name were hard-coded and already deployed in `main`), because Pulumi would not have recorded that bucket as belonging to `test`, as it wouldn't have been deployed in that stack.
So yes, in actual practice, it's quite difficult to do this. You'd likely have to perform some sort of manual surgery on your state file, and manually overwrite the state of the `test` stack, in order to do it. Definitely wouldn't be an easy thing to do by accident.
Hopefully that makes sense!
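The auto-naming behavior described above can be illustrated, loosely, with a helper that appends a short random suffix to a logical resource name. This is an illustration only, not Pulumi's actual algorithm:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// autoName loosely mimics Pulumi's auto-naming: the logical resource name
// plus a short random suffix, so each stack ends up with a physically
// distinct resource even when the logical names match.
func autoName(logical string) string {
	b := make([]byte, 4)
	if _, err := rand.Read(b); err != nil {
		panic(err)
	}
	return logical + "-" + hex.EncodeToString(b)
}

func main() {
	fmt.Println(autoName("bucket")) // e.g. bucket-a1b2c3d4
}
```

Because `main` and `test` each get their own suffixed physical name, destroying one stack's bucket cannot touch the other's.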

abundant-rain-94621

01/15/2023, 2:58 AM
Awesome, thanks for the detailed response. Yes, the issue was definitely naming consistency between the two different stacks. What I have ended up doing instead is pulling in the environment variable from EnvStack and injecting it into named resources where I need some consistency, and letting auto-naming work for everything else.