# general
b
Hi, I'm evaluating Pulumi, coming from Terraform. I'm looking for advice on how I could structure Pulumi projects/stacks based on what we currently have. Right now we use a sort of monolithic infrastructure repo, split by environment (e.g., dev, staging, prod). So, for example, for a Node.js-based Elastic Beanstalk API application (we use AWS), we have Terraform configurations at `dev/applications/api-content`, `staging/applications/api-content`, and `prod/applications/api-content`. Each of these directories has its own Terraform config that might describe an S3 bucket or two, the EB app, an EB environment (or two), DBs, etc. Importantly, they're not all the same underlying infrastructure/resources.

Initially I figured re-creating this setup could be done by having, e.g., an `api-content` project with `dev`, `staging`, and `prod` stacks. But since, for example, there might not be an S3 bucket in the `dev` stack whereas there is in `staging` and `prod`, it started to look like there would be a lot of if/else statements in my Pulumi source files. Do you think it might make more sense to create a project per environment, each with a single stack? That way I could ensure the Pulumi source is tailored specifically to each environment, and I could still have something like a common project/stack that each environment pulls from for conventions/components shared between these environments.
b
Hi @broad-pencil-85643, I'm actually writing a blog post right now on this very topic, so stay tuned.
c
I would say, for the time being, that yes, creating a project for each environment is probably the way to go if the environments are genuinely different infrastructure. But you will have some manual work to do to elevate changes; I imagine it's similar to what you have to do with Terraform. To alleviate some of that, I would extract the components common to the environments into a library and version it accordingly, so that you can roll out those updates by changing a version number. At least, that's the easiest way I can imagine doing this.
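To sketch that idea (the package name `@acme/infra-components` is hypothetical): publish the shared components as an internal npm package, and have each environment's project pin a version in its `package.json`, e.g. in the prod project:

```json
{
  "dependencies": {
    "@acme/infra-components": "1.4.0"
  }
}
```

Rolling a component change out to an environment is then just bumping that version and running `pulumi up` there.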
a
@broad-dog-22463 I am also interested to see how best to architect that, so looking forward to your blog post. So far, @broad-pencil-85643, I manage by just using `pulumi.runtime.getStack()` to detect the running stack and execute the changes required for that specific stack. You can see this approach here (WIP, lots of things require refactoring, but most importantly it works 😄): https://github.com/ever-co/gauzy-pulumi/blob/develop/index.ts
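A minimal sketch of that pattern (the resource flags here are made up for illustration; in a real Pulumi program the stack name would come from `pulumi.runtime.getStack()`):

```typescript
// Per-stack feature flags: which resources each environment actually gets.
// These flags are illustrative, not from the linked repo.
type StackFeatures = { createBucket: boolean; ebEnvCount: number };

function featuresFor(stack: string): StackFeatures {
  switch (stack) {
    case "dev":
      return { createBucket: false, ebEnvCount: 1 };
    case "staging":
      return { createBucket: true, ebEnvCount: 1 };
    case "prod":
      return { createBucket: true, ebEnvCount: 2 };
    default:
      throw new Error(`unknown stack: ${stack}`);
  }
}

// The program then branches once on the result instead of scattering
// if/else checks next to every resource declaration.
const features = featuresFor("dev"); // in real code: featuresFor(pulumi.runtime.getStack())
```

This keeps the conditionals in one table-like place, but it's still one program serving all environments, which is exactly the trade-off being discussed.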
a
We also had a monolithic Terraform config before Pulumi, but we used Terragrunt to help split each part up more. We decided to have a "central" project with stacks for each environment to control shared resources. We think of stacks as being for the different envs, and projects as being for each service, etc.
b
Thanks for all of the replies! I'm thinking it could be a good idea to re-shape how I think about my infrastructure as something like `dev`, `staging`, `prod`, and `shared` projects. That way, everything that lives in the `shared` project would be almost identical, except for config-level changes between stacks. Any more extreme outliers between each environment could be managed in the appropriate environment's project. For example, if we need 3 Elastic Beanstalk environments in `dev` because we're testing 2 environments that might never make their way to `staging` or `prod`, we can describe the 1 shared environment in the `shared` project, and those other 2 outliers just stay in `dev` (and can be elevated up to `shared` if they make the cut).

The one downside of this, as opposed to creating very granular "micro-stacks", seems to be that any time I do `pulumi up` for a given project, it has a lot more heavy lifting to do to determine changes (since it's acting on an entire vertical slice of our infrastructure vs. a single application's stack). But that's kind of a separate issue of monolithic vs. micro-stacks, it seems.
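In case it helps anyone later: one way the environment projects could pull from the `shared` project is Pulumi's `StackReference`. A sketch (not runnable outside a Pulumi program; the `acme/shared/dev` stack path and the `vpcId` output name are illustrative):

```typescript
import * as pulumi from "@pulumi/pulumi";

// In the dev project: reference outputs exported by the shared
// project's dev stack (org/project/stack path is illustrative).
const shared = new pulumi.StackReference("acme/shared/dev");

// Assuming the shared project does `export const vpcId = vpc.id;`,
// this retrieves that output for use in dev's own resources.
const vpcId = shared.getOutput("vpcId");
```

That keeps the shared resources defined once per environment while letting each environment project consume them without duplicating their definitions.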