# general
s
Is there a good way to work with Pulumi and multiple stacks (i.e. dev, test, staging, prod) where you can promote changes between them? It seems that since the current stack is denoted by a `Pulumi.{stackname}.yaml`, there isn't a good way to version the configuration embedded within that stack yaml, and you end up accruing merge commits as you push changes to the next stack.
b
hey, it's not the case that the "current stack" is denoted by a `Pulumi.{stackname}.yaml` - there should be one of those for every stack you have, and you should check those into your version control
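As a rough illustration of how those per-stack files get used (the project name and config key below are made-up examples, not from this thread), the same program reads whichever stack's yaml is currently selected:

```typescript
// Minimal sketch, assuming a TypeScript Pulumi program and a hypothetical
// config key "chartVersion". The value comes from the yaml file belonging
// to the currently selected stack, e.g.:
//   Pulumi.dev.yaml  ->  config: { my-project:chartVersion: "45.7.1" }
//   Pulumi.prod.yaml ->  config: { my-project:chartVersion: "44.3.0" }
import * as pulumi from "@pulumi/pulumi";

const config = new pulumi.Config();
const chartVersion = config.require("chartVersion"); // per-stack value
export const stack = pulumi.getStack();              // "dev", "test", "prod", ...
```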
s
yes, but isn't the best way to version things between stacks by using branches?
and what about versioning the contents of those files (i.e. configuration)?
s
> yes, but isn't the best way to version things between stacks by using branches?
Not necessarily. For example, you might have persistent dev, staging, and production EKS clusters. In that case you might want to have 3 Pulumi stacks for a single Project, with the `main` branch as your source of truth. If you want to test changes against only `dev`, you can modify your code/configuration in a way that only impacts dev on a `pulumi up`.
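A small sketch of the "only impacts dev" idea: gate the experimental change on the stack name (the stack names and values here are assumptions, not from the thread):

```typescript
// Sketch: apply an experimental change only when running against the dev stack.
import * as pulumi from "@pulumi/pulumi";

const stack = pulumi.getStack(); // e.g. "dev", "test", "prod"
const isDev = stack === "dev";

// Hypothetical example: keep a smaller footprint while testing against dev only.
const nodeCount = isDev ? 1 : 3;
```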
s
Is there a good blog post or example project showing how to do this? Mainly I'm trying to deploy a full stack from AWS resources / EKS through the helm releases, and I've got the helm chart version numbers stored in the stack yaml. For simple version upgrades, I'd like to promote the new chart versions (and any other values changes) through the dev/test/prod process. The git branch/PR process mirrors my ideal workflow, but since the version numbers are inside the stack yaml right now, I can't trivially merge them between branches, since each stack yaml has a different name and may contain other environment-specific details (node sizing, domain names, etc).
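For context, something like the following is presumably what the stack yaml is feeding today (the chart, repo URL, and config key are illustrative guesses, not taken from the thread):

```typescript
// Sketch: a chart version read from the stack's Pulumi.<stack>.yaml and passed
// to a Helm release. Promoting a version then means editing yaml in one branch
// and merging it forward to the next environment's branch.
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

const config = new pulumi.Config();

const prometheus = new k8s.helm.v3.Release("kube-prometheus-stack", {
    chart: "kube-prometheus-stack",
    version: config.require("prometheusChartVersion"), // hypothetical key
    namespace: "monitoring",
    repositoryOpts: { repo: "https://prometheus-community.github.io/helm-charts" },
});
```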
s
hm, can you elaborate on your workflow? You have a repo with a helm chart and three branches: dev, test, and prod. When the deployment of the helm chart and the tests pass on dev, you want to deploy that helm chart to test; when the tests on test pass, you want to deploy to prod. Is that right? If so, am I missing anything important?
s
yes. Ideally I want things to merge cleanly between branches without the need for a merge commit (due to files changed in different branches). The PR lets us review the changes fairly easily and quickly understand the scope of changes.
s
Here's what I suggest. Your codebase should have the yaml files for all 3 stacks: dev, test, prod. Your repo should have `dev` as the default branch. Any new features are first merged into `dev`. You can reference the helm chart by path rather than a version specification (so you don't need to worry about updating a version number - the code on the persistent git branch will be a full description of your infra). A merge into `dev` kicks off a deploy and test. If things are good then PR from `dev -> test`. If things are good after merge then PR from `test -> prod`. What do you think?
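A small sketch of the "reference the helm chart by path" idea (the chart directory is hypothetical); the chart contents then travel with the branch instead of a version number in the stack yaml:

```typescript
// Sketch: deploy a chart vendored into the repo, so the branch itself is
// the full description of what gets deployed.
import * as k8s from "@pulumi/kubernetes";

const app = new k8s.helm.v3.Chart("app", {
    path: "./charts/app", // hypothetical path to the chart inside this repo
    // values: { ... }    // per-environment overrides can still come from code/config
});
```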
b
you could also get the chart version out of the stack config and use `pulumi up -c chartVersion=1.0`, and then set `chartVersion` inside the pipeline instead
s
@billowy-army-68599 - it's like 10-15 chart versions... ideally this is all of the external infrastructure that sits around our core applications (prometheus, various operators, etc)
@steep-toddler-94095 - so I think I like what you're suggesting... have all three yamls in each branch, but each branch's CI only activates the necessary stack config. I can potentially externalize the chart versions into a separate yaml that I just read as well... so the only items in the stack yaml are the environment-specific things like DNS, node sizing/quantities/etc.
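One way that "separate yaml that I just read" could look (the file name, keys, and use of js-yaml as the parser are all assumptions):

```typescript
// Sketch: chart versions live in a single versions.yaml shared by every
// branch/stack, so promoting versions is a clean merge of one file.
import * as fs from "fs";
import * as yaml from "js-yaml";

interface ChartVersions {
    prometheus: string;
    certManager: string;
    // ...one entry per chart
}

const versions = yaml.load(
    fs.readFileSync("versions.yaml", "utf8"),
) as ChartVersions;

// e.g. pass versions.prometheus to a helm Release's `version` field
```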
s
yeah pretty much! side point about the stack config: I personally try to minimize my usage of it. I only use the stack configs for secrets, default providers, and a single `environment` value, which I use to choose the "actual" config in code. This allows me to do much more, like statically type my config and pull values from other code or non-code files. though if you won't benefit from those things then there's no need to do this
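A sketch of that "single environment value" pattern (the type and values are illustrative, not from the thread):

```typescript
// Sketch: the stack config only holds `environment`; the real, statically
// typed configuration lives in code and is selected from it.
import * as pulumi from "@pulumi/pulumi";

interface EnvConfig {
    nodeCount: number;
    domain: string;
}

const envConfigs: Record<string, EnvConfig> = {
    dev:  { nodeCount: 1, domain: "dev.example.com" },   // hypothetical values
    test: { nodeCount: 2, domain: "test.example.com" },
    prod: { nodeCount: 5, domain: "example.com" },
};

const environment = new pulumi.Config().require("environment");
const cfg = envConfigs[environment];
```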
s
Ideally I'm hoping to use Kubernetes secrets to store and retrieve my secrets as needed. I haven't gotten to that part yet, but my goal is to wire everything so that the secrets are generated by a Kubernetes job and everything else just references them at runtime. This lets us trivially roll secrets, since we don't have to distribute secrets to multiple places.
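On the consuming side that could look roughly like this (resource names, labels, and image are placeholders): the workload references the Secret by name only, so rotating its contents never touches the Pulumi program or state.

```typescript
// Sketch: the container pulls credentials from a Secret created out-of-band
// (e.g. by a Kubernetes Job), so Pulumi never handles the secret values.
import * as k8s from "@pulumi/kubernetes";

const app = new k8s.apps.v1.Deployment("app", {
    spec: {
        selector: { matchLabels: { app: "app" } },
        replicas: 1,
        template: {
            metadata: { labels: { app: "app" } },
            spec: {
                containers: [{
                    name: "app",
                    image: "registry.example.com/app:latest", // hypothetical image
                    // Reference the externally-managed Secret by name only.
                    envFrom: [{ secretRef: { name: "app-credentials" } }],
                }],
            },
        },
    },
});
```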
s
yup it's totally valid to not have Pulumi manage secrets too!