# general
c
i'm starting to think about ci/cd with pulumi. I've got a monorepo of pulumi projects, with multiple stacks for most projects. I'm thinking I need a tool that figures out which projects have changed, runs preview/up on each of those projects, and also lets me specify the order in which the previews/ups are applied. Are there any existing open source tools that do this? Would love to hear from others who have gone down this path and how they've done it.
c
Depending on which CI/CD platform you are using, you should be able to create triggers for when pipelines are completed.
We are using Azure DevOps Pipelines, but we are in the POC phase. Our build phase essentially packages the lambdas etc., then the deploy phase runs the up.
b
hey! which CI/CD tool are you planning to use?
c
@calm-parrot-72437 We’re in a similar situation. We developed a basic tool that compares all the changes from the last successful build and then executes them all in their alphabetical order. It’s not necessarily the nicest code, but it works. It’s not open source at the moment, but it could be.
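Roughly, the shape of such a script might look like the sketch below. This is not the actual tool, just an illustration: it assumes each project lives in its own top-level directory containing a Pulumi.yaml, and that CI supplies the last successful commit SHA and a stack name via environment variables.

```typescript
// Sketch: find Pulumi projects touched since the last successful build
// and run `pulumi up` on each, in alphabetical order.
import { execSync } from "child_process";
import * as fs from "fs";
import * as path from "path";

// Assumed to be supplied by CI (e.g. recorded after the last green pipeline).
const lastGoodSha = process.env.LAST_SUCCESSFUL_SHA ?? "HEAD~1";
const stack = process.env.STACK ?? "staging";

// Files changed since the last successful build.
const changedFiles = execSync(`git diff --name-only ${lastGoodSha} HEAD`)
  .toString()
  .trim()
  .split("\n")
  .filter(Boolean);

// A top-level directory is a Pulumi project if it contains a Pulumi.yaml.
const isPulumiProject = (dir: string) =>
  fs.existsSync(path.join(dir, "Pulumi.yaml"));

// Map changed files to their top-level project directories.
const changedProjects = Array.from(
  new Set(changedFiles.map((f) => f.split("/")[0]))
).filter(isPulumiProject);

// Alphabetical order, as described above.
for (const project of changedProjects.sort()) {
  console.log(`Deploying ${project}...`);
  execSync(`pulumi up --yes --stack ${stack} --cwd ${project}`, {
    stdio: "inherit",
  });
}
```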
c
I'll be using gitlab
I think i need a little more than alphabetical order... if there are outputs of one project that another one needs, i need to deploy the one with the new outputs first, no?
I could split out the repos and sort of force this, like I currently do manually... but I'm not sure I want to do that so early.
c
We haven’t run into such an issue at the moment. If we open sourced it, we’d be happy to accept a PR to add such support.
To handle the manual ordering, that is.
c
do you keep your commits to one project at a time? @cool-egg-852
c
Not generally. But we also don’t generally make changes that depend upon each other.
Each one of our projects is generally pretty self-contained
c
sometimes i wonder if I'm doing this right 😉
c
I’d be happy to discuss our architecture a bit with you on a private call. If you are interested, DM me.
c
i'm on kubernetes and i'll keep the Deployment/Service, etc. in one project. but if the project relies on other infrastructure, say s3 buckets, a filesystem, database, etc., those will be done elsewhere
c
That should all be in one project generally.
Kubernetes itself is obviously separate, but if you have `testapp1`, and it needs a DB, s3 bucket, etc., that would all go in the `testapp1` pulumi project.
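As a rough TypeScript sketch of that layout (sketched with the AWS provider since an s3 bucket is mentioned; names, sizes, and the image below are made up), everything `testapp1` owns lives in one program:

```typescript
// Sketch of a self-contained "testapp1" project: the app's bucket,
// database, and Deployment all live in the same Pulumi program.
import * as aws from "@pulumi/aws";
import * as k8s from "@pulumi/kubernetes";

// Storage owned by testapp1.
const bucket = new aws.s3.Bucket("testapp1-assets");

// Database owned by testapp1 (values are illustrative only).
const db = new aws.rds.Instance("testapp1-db", {
  engine: "postgres",
  instanceClass: "db.t3.micro",
  allocatedStorage: 20,
  username: "testapp1",
  password: "change-me", // use a config secret in real code
  skipFinalSnapshot: true,
});

// The workload itself, wired to the resources above.
new k8s.apps.v1.Deployment("testapp1", {
  spec: {
    selector: { matchLabels: { app: "testapp1" } },
    replicas: 2,
    template: {
      metadata: { labels: { app: "testapp1" } },
      spec: {
        containers: [{
          name: "testapp1",
          image: "registry.example.com/testapp1:latest", // placeholder image
          env: [
            { name: "BUCKET", value: bucket.bucket },
            { name: "DB_HOST", value: db.address },
          ],
        }],
      },
    },
  },
});
```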
c
well, they might be shared between projects, so which one would it be deployed in
c
We always pick an “owner” application.
But we also try to avoid sharing resources between projects.
For example, all of our application communication must happen over API or queue of some sort. We don’t have more than 1 app per database. It does make this a bit easier.
c
but i'd think you'd still have dependencies. to deploy my kubernetes applications i'm reliant on the outputs from the project that brings up the cluster, for example. this doesn't change, but being able to model the dependencies appropriately means i could bring up a new environment quickly. without doing so, i'd still have manual work to do.
c
We separate kubernetes itself into a separate project.
We are using GCP, so our GCP networking is a separate project as well, and we use a StackReference for that.
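In code, consuming those outputs looks roughly like this (the stack name and output names here are assumptions, not our actual ones):

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Pull outputs from the separately deployed cluster project.
// "my-org/gke-cluster/staging" and "kubeconfig" are illustrative names.
const cluster = new pulumi.StackReference("my-org/gke-cluster/staging");
const kubeconfig = cluster.requireOutput("kubeconfig");

// Use the referenced kubeconfig for this project's Kubernetes resources.
const provider = new k8s.Provider("cluster", { kubeconfig });

new k8s.core.v1.Namespace("testapp1", {}, { provider });
```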
c
exactly and those become dependencies of your applications. you have to control the order of operations there, if perhaps only at the very beginning as the environments are created
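one way i could imagine modelling that is an explicit dependency map that gets topologically sorted before running pulumi up. purely a hypothetical sketch (project names made up, no cycle detection):

```typescript
// Hypothetical ordering layer: declare which projects consume outputs from
// which others, then deploy in topological order (producers before consumers).
import { execSync } from "child_process";

// Project -> projects it depends on (names are illustrative).
const dependsOn: Record<string, string[]> = {
  "gcp-networking": [],
  "gke-cluster": ["gcp-networking"],
  "testapp1": ["gke-cluster"],
};

// Depth-first topological sort; assumes the graph has no cycles.
function deployOrder(graph: Record<string, string[]>): string[] {
  const order: string[] = [];
  const visited = new Set<string>();
  const visit = (project: string) => {
    if (visited.has(project)) return;
    visited.add(project);
    (graph[project] ?? []).forEach(visit);
    order.push(project); // dependencies first, then the project itself
  };
  Object.keys(graph).forEach(visit);
  return order;
}

for (const project of deployOrder(dependsOn)) {
  execSync(`pulumi up --yes --cwd ${project} --stack ${process.env.STACK}`, {
    stdio: "inherit",
  });
}
```

this could be combined with the changed-project detection above so only changed projects run, in dependency order.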
c
But outside of these few specific changes, we don’t have a ton of dependencies. Our big one is just the API hostname that is used by other services. However that never changes, so we do this a little manually when we have a new service.
Yeah, it’s basically only at the beginning.
Because we never have new environments, we didn’t spend the time on a tool to handle it.
c
yeah, maybe i don't sweat the small stuff and leave the beginning mostly manual
c
That’s how we do it. I recommend documenting it, so if you do need to go through the process again, you have the order there.
As long as you rarely create a new environment, it may or may not be worth the work to automate it.
c
yeah, i only have one working environment now and am thinking about how to go to one more and then maintain consistency there.
c
We do a staging + production setup, both of which are nearly identical, with any differences handled via configuration.
So as you decide to do that, look into moving more values into configuration so you don’t have to do a bunch of conditionals.
We keep a single branch though, so it does make it harder to test out changes. Generally we wrap conditionals around changes and make it configurable in the stack files.
It makes it harder to apply some changes, and others easier. You’ll have to figure out what is best for you on that front.
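To make the configuration idea concrete (key names here are made up), a value that differs between stacks can come straight from the stack's config file instead of a conditional on the stack name:

```typescript
import * as pulumi from "@pulumi/pulumi";

// Per-environment values live in Pulumi.staging.yaml / Pulumi.production.yaml,
// set with e.g. `pulumi config set replicas 2` on each stack.
const config = new pulumi.Config();
const replicas = config.requireNumber("replicas");
const machineType = config.get("machineType") ?? "e2-standard-2";
```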
c
i see. yeah, i was thinking about using different branches.. but that doesn't map perfectly to how i'm doing things right now
some things i don't have staging versions of... like vpn for example.. so not exactly sure how to handle that cleanly with multiple branches. i guess it could be separated out from the stage branch and kept only on the master branch, for example.
c
For example, we’re switching away from istio to linkerd, so we have a `project:useLinkerd` bool in the `Pulumi.staging.yaml` file. If it’s true, we use the linkerd configuration, otherwise we use the `istio` configuration. This to us is easier to manage than separate branches.
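In the program that flag might be read roughly like this (the annotation-based switch below is just an illustration of the pattern, not our actual code):

```typescript
import * as pulumi from "@pulumi/pulumi";

// `useLinkerd` comes from Pulumi.staging.yaml (or Pulumi.production.yaml).
const config = new pulumi.Config();
const useLinkerd = config.getBoolean("useLinkerd") ?? false;

// Pick the mesh-specific bits based on the flag, e.g. pod annotations
// that get merged into the Deployment's pod template.
const meshAnnotations = useLinkerd
  ? { "linkerd.io/inject": "enabled" }
  : { "sidecar.istio.io/inject": "true" };
```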
Semi related: if you are using AWS or GCP, use a separate account/project per environment.
l
I’m a fan of ConcourseCI. Their `git` support allows for a `path` specification which results in only triggering builds when files within that(/those) path(s) are changed. https://concourse-ci.org/ https://github.com/concourse/git-resource It is also the only CI where pipelines can be modelled over multiple git repositories. See Concourse CI building its own codebase as an example: https://ci.concourse-ci.org/teams/main/pipelines/concourse
c
oh that is nice, Ringo. i've always had to build that in the past myself.