# general
c
Random question… where do you run `pulumi`? Like… ok, I run it locally on my machine when I’m trying it out and getting things started up… but day-to-day for CI/CD stuff, presumably we’d kick off a job or a CodeBuild or a Lambda function or something, right?
s
You can integrate Pulumi with first-class support for many CI/CD pipelines: https://www.pulumi.com/docs/iac/using-pulumi/continuous-delivery/ including our own Pulumi Deployments: https://www.pulumi.com/docs/pulumi-cloud/deployments/ E.g. you can get a preview of your infra changes on a GitHub pull request, or even use it to set up a review environment
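As a rough illustration of the "preview on a pull request" flow, here is a hedged Python sketch using Pulumi's Automation API (the stack name and working directory are hypothetical, and the import is deferred because it needs the `pulumi` pip package plus the CLI installed):

```python
def preview_for_pr(stack_name: str, work_dir: str) -> dict:
    """Run `pulumi preview` programmatically and return planned change counts."""
    from pulumi import automation as auto  # deferred: requires `pip install pulumi` + Pulumi CLI
    # Select an existing stack in a local Pulumi project directory
    stack = auto.select_stack(stack_name=stack_name, work_dir=work_dir)
    result = stack.preview()
    # change_summary maps operation type to count, e.g. {"create": 2, "same": 14};
    # a CI job could post this back as a PR comment
    return result.change_summary
```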
a
We use the Pulumi CLI directly in Jenkins shell scripts, and `pulumi/actions@v6` in GH Actions.
l
You know you've done something right when you can run Pulumi inside an isolated subnet and still see the results in a Bitbucket pipeline 🙂 Pulumi runs wherever's best for your requirements.
l
I developed a clone of Terraform’s Atlantis, with a simple GUI for managing/monitoring deployments and state files. We use Bitbucket Pipelines, so I built a Bitbucket Pipe that runs in an ECS container. It was a bear to get working tbh, but it’s reasonably stable now. It would have been an order of magnitude easier with GitHub Actions (but of course with the increased risk of supply-chain attacks)
c
@lively-translator-16002 That sounds similar to what I’ve been pondering… I’ve got my foundation stuff working (mostly?), i.e. the infrastructure for our infrastructure. That includes the ECR repos to store Docker images, S3 buckets to store zips for Lambdas, pipelines to build AMIs for EC2 deployments, and some other zips for overlaying files onto other servers…. So that means I basically have ALL THE INGREDIENTS for our regular stacks, so we could have a lock file or manifest that would declare
```
some_docker_service: ECR-digest-123
some_lambda: s3://bucket/something.zip
some_ec2: ami-123
...etc...
```
A lot of that is gonna be just the regular Pulumi config… i.e. that pretty much IS the manifest. But… this is where things start to get trickier…
- Multi-region: when we are deploying to multiple regions, we want to specify the strategy. Usually this would be a “dark-region deploy” where we drain traffic out of one region, deploy the code there, bring traffic back up and wait… if everything looks good, then drain traffic from the next region and then deploy there, etc.
- Env vars: sometimes a deploy is just a change in the env vars… I’ve started to look into AWS AppConfig because I think it would allow us to create sets of these that could be referenced in the same way as ECR digests etc.
- Configuration overlays: sometimes a deploy means rsync’ing (or similar) a set of files onto an EC2 server and restarting a service.

I don’t want to reinvent wheels and end up with a custom thing that we need to maintain (but I certainly understand that sometimes you have to build the thing you need)…. some of Pulumi’s other services seem like they could handle this stuff… but I need to read up on them more.
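The lock-file idea above could be sketched in a few lines of Python. This is purely illustrative: the service names and artifact references are the placeholder ones from the snippet, not real resources.

```python
import json

# Hypothetical deploy manifest: each service is pinned to an immutable
# artifact reference (an ECR image digest, an S3 zip, or an AMI id).
MANIFEST = json.loads("""
{
  "some_docker_service": "ECR-digest-123",
  "some_lambda": "s3://bucket/something.zip",
  "some_ec2": "ami-123"
}
""")

def artifact_for(service: str) -> str:
    """Return the pinned artifact a stack should deploy for `service`."""
    return MANIFEST[service]

print(artifact_for("some_ec2"))  # -> ami-123
```

In practice a Pulumi program could read a file like this at deploy time (or hold the same data in stack config), so a "deploy" is just a commit that bumps one artifact reference.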
l
You could also look into CodeDeploy or SSM for part of this, but as usual with AWS, the work is front-loaded and you'll find yourself mid-project wondering if you made the wrong choice.
How about using SSM parameter store and/or secrets manager for sourcing those Env vars, or something third-party like Cloudtruth or Vault (Or ESC?)
I only built a custom solution because $employer is quite averse to paying for anything
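For the SSM Parameter Store option, a minimal hedged sketch in Python (the `/myapp/prod` path layout is invented for illustration; the import is deferred because it needs `boto3` plus AWS credentials):

```python
def load_env_from_ssm(path: str) -> dict:
    """Fetch every parameter under `path` as an env-var style dict."""
    import boto3  # deferred: requires `pip install boto3` + AWS credentials
    ssm = boto3.client("ssm")
    env = {}
    # get_parameters_by_path returns at most 10 results per page, so paginate
    for page in ssm.get_paginator("get_parameters_by_path").paginate(
        Path=path, Recursive=True, WithDecryption=True
    ):
        for param in page["Parameters"]:
            # e.g. "/myapp/prod/DB_HOST" -> "DB_HOST"
            env[param["Name"].rsplit("/", 1)[-1]] = param["Value"]
    return env
```

A service (or its start-up wrapper) calls something like `load_env_from_ssm("/myapp/prod")` at boot, so an "env-var deploy" becomes: update the parameters, then restart the service.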
c
yeah… it’s a lot of front-loading… The env-var “deploys” are the easier of the two, I think… but I gotta figure out how to restart services that only read them at “runtime” (i.e. when the service starts). rsync’ing a bunch of files is trickier… that will still need to bounce a service. Because Pulumi is code… I’m wondering if I could trigger actions like that somehow
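One non-custom way to bounce a service on EC2 is SSM Run Command with the stock `AWS-RunShellScript` document. A hedged sketch, assuming the instances run the SSM agent (instance ids and the systemd unit name are placeholders; boto3 import deferred as before):

```python
def restart_service(instance_ids, service_name):
    """Ask SSM Run Command to restart a systemd unit on the given instances."""
    import boto3  # deferred: requires `pip install boto3` + AWS credentials
    ssm = boto3.client("ssm")
    resp = ssm.send_command(
        InstanceIds=instance_ids,
        DocumentName="AWS-RunShellScript",  # stock AWS-managed document
        Parameters={"commands": [f"sudo systemctl restart {service_name}"]},
    )
    # Poll this id with get_command_invocation to confirm the restart succeeded
    return resp["Command"]["CommandId"]
```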
l
You should be able to with the Automation API. I use it in two places primarily: building chains of Docker containers, and kicking off deployments when linked services have changes, like when database secrets rotate.
👀 1
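For reference, a minimal Automation API sketch of the "kick off a deployment from code" idea (stack name and working directory are hypothetical; the import is deferred because it needs the `pulumi` pip package and the Pulumi CLI):

```python
def redeploy(stack_name: str, work_dir: str) -> str:
    """Run `pulumi up` programmatically, e.g. from a secret-rotation hook."""
    from pulumi import automation as auto  # deferred: requires `pip install pulumi` + Pulumi CLI
    stack = auto.select_stack(stack_name=stack_name, work_dir=work_dir)
    stack.refresh(on_output=print)      # pick up any out-of-band changes first
    result = stack.up(on_output=print)  # roughly equivalent to `pulumi up --yes`
    return result.summary.result        # "succeeded" on success
```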
m
@crooked-ram-3551 me as K8s fanboy: https://github.com/pulumi/pulumi-kubernetes-operator with Argo CD and kro (https://github.com/kubernetes-sigs/kro)
c
Thanks @many-telephone-49025, though I admit I have not gotten into K8s. For technical reasons, a lot of our current services cannot be containerized, so I haven’t had an excuse to check it out.