# getting-started
p
Hey all, I'm thinking about migrating my personal side projects setup to Pulumi. As it stands today, I configure everything in one giant Ansible playbook (https://github.com/banool/server-setup). This playbook communicates with my personal home server to open ports, install packages, start DBs, copy across config files, run containers, set up timers, you name it. I want to split up each of these distinct apps. I've got a couple of questions:
1. Should I write a separate Pulumi program per app I'm working with?
2. Pulumi seems to encourage you to write programs that target one particular provider. Is there a more agnostic way to approach this? Perhaps Kubernetes? Ideally I'd learn Kubernetes, but I'm not sure it's necessary here / I feel like using it might throw out some of the power of Pulumi.
3. When moving from one server to the cloud, I imagine I'd switch to hosting my containers with something like ECS, spin up a DB using a native provider DB service, etc. Is it possible to write a program that will work for both this new setup and my existing "everything on one box" setup, or am I dreaming?
Any tips on how to get started would be much appreciated!
l
1. No, the most common convention is one project per deployment cadence: if things get deployed at the same time, they belong in the same project.
2. Pulumi doesn't encourage this. Hopefully that's just the way the examples are written?
3. The everything-on-one-box and Ansible-replacement ideas aren't really what Pulumi is for. Its main purpose is deploying infrastructure; installing applications afterwards can be done with Pulumi, but it's not the primary goal, so you'll probably hit some awkwardness getting this working.
There isn't an Ansible provider, so to replace what you do in Ansible with Pulumi code you'll be writing a lot of shell scripts, local.Command resources, and things like that. You'd end up with two projects: one to deploy your cloud resources, and another to install your app, which you can point at either those cloud resources or your always-existing single machine.
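For a flavour of what that looks like, here's a rough Python sketch using the pulumi_command package (which provides local.Command and remote.Command). The host, user, the "sshKey" config secret, the paths, and the package name are all placeholders, not anything from your actual setup:
```python
import pulumi
import pulumi_command as command

# Runs on the machine executing `pulumi up` (placeholder paths).
write_config = command.local.Command(
    "write-config",
    create="cp config/app.toml /etc/myapp/app.toml",
)

# Runs over SSH on the target box, which is closer to what Ansible does today
# (host, user, and the "sshKey" config secret are placeholders).
install_pkgs = command.remote.Command(
    "install-packages",
    connection=command.remote.ConnectionArgs(
        host="home-server.example.com",
        user="deploy",
        private_key=pulumi.Config().require_secret("sshKey"),
    ),
    create="sudo apt-get install -y nginx",
)
```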
p
1. I should say that the only reason these apps are deployed together right now is that they run on the same machine and sit behind the same reverse proxy. With Pulumi I expect them to be totally independent.
2. I figure there's still work involved when switching from one provider to another though? In which case, if I'm worried about that, shouldn't I just use Kubernetes, which could be running anywhere?
3. Makes sense, and the same goes for the Ansible stuff. I figure it's Pulumi first, Ansible next.
l
2. Nope, you just set up your various provider objects (e.g. AWS us-west-2, GitHub, Docker, Slack etc.); Pulumi figures out which provider to use based on the type of the resource you create. You can even have multiple providers of the same type (e.g. AWS us-west-2 + AWS eu-west-1) and pass in the appropriate one to individual resources.
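For example, a minimal sketch in Python (the bucket names are just placeholders; the regions are the ones mentioned above):
```python
import pulumi
import pulumi_aws as aws

# Two explicit providers of the same type, pointed at different regions.
us_west = aws.Provider("aws-us-west-2", region="us-west-2")
eu_west = aws.Provider("aws-eu-west-1", region="eu-west-1")

# The same resource type deployed once per region by passing the provider explicitly.
bucket_us = aws.s3.Bucket("assets-us", opts=pulumi.ResourceOptions(provider=us_west))
bucket_eu = aws.s3.Bucket("assets-eu", opts=pulumi.ResourceOptions(provider=eu_west))
```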
p
That's sort of my question though: if I switch providers I need to change what resources I'm using, which means figuring out pulumi_aws vs pulumi_azure, for example. Whereas if I just use pulumi_kubernetes, any change of provider is handled completely transparently from that perspective.
l
Only from an application deployment point of view... which is not what AWS or Azure are for. Unless you plan to deploy k8s once, manually, you'll need AWS or Azure or GCP, won't you?
All the important infra (firewalls, DBs, user pools, backups, NACLs, SGs, and of course k8s itself) is deployed to AWS/Azure/GCP.
k8s only helps with the last few steps in the journey.
p
I'm thinking from a Pulumi program point of view.
l
Me too.
If you're writing IaC (Pulumi) to deploy to k8s, then you (presumably) need to write the IaC to deploy k8s too.
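To make that concrete, here's a rough sketch of what a single program might look like, assuming pulumi_eks on AWS plus pulumi_kubernetes; the app name and image are placeholders:
```python
import json
import pulumi
import pulumi_eks as eks
import pulumi_kubernetes as k8s

# The cluster itself is cloud-specific IaC (this sketch assumes EKS on AWS).
cluster = eks.Cluster("apps-cluster")

# A Kubernetes provider pointed at that cluster; everything created through it
# is the part of the program that stays portable across managed k8s offerings.
k8s_provider = k8s.Provider(
    "apps-k8s",
    kubeconfig=cluster.kubeconfig.apply(lambda kc: json.dumps(kc)),
)

# Portable app deployment (placeholder name and image).
app = k8s.apps.v1.Deployment(
    "hello",
    spec={
        "selector": {"matchLabels": {"app": "hello"}},
        "replicas": 1,
        "template": {
            "metadata": {"labels": {"app": "hello"}},
            "spec": {"containers": [{"name": "hello", "image": "nginx:1.25"}]},
        },
    },
    opts=pulumi.ResourceOptions(provider=k8s_provider),
)
```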
p
I'm referring to this page: https://www.pulumi.com/docs/get-started/. It asks you to select a provider. If I select AWS I end up writing my program with a bunch of AWS-specific stuff, vs. generic k8s, which could then be pointed at Google / AWS / whoever's managed Kubernetes.
Ah, I see your point: you're referring to the trade-off between writing the cloud-provider-native IaC vs. the k8s IaC plus the IaC to actually deploy that k8s setup.
💯 1
My assumption is that the latter is probably less overhead in the long run and introduces less provider lock-in though, wouldn't you agree?
l
IMO, you don't get much (any?) benefit from k8s if you're using it as a deployment facilitator. The benefit comes mostly from redundancy / recovery / it-can-look-after-itself-ness.
You won't reduce deployment lock-in, because in order to use (for example) EKS, you still need to use ECR, Docker, AWS EC2 (for the VPC / SGs), S3 (to deploy packages to build the images), and loads more.
Well, you might reduce lock-in for one small part of the process (the app deployment part), but that really will be a small part.
And you might get the same benefit from going with plain-old Docker!
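e.g. a rough sketch with the pulumi_docker provider, run against the Docker daemon on your box (the image, port mapping, and container name are placeholders):
```python
import pulumi_docker as docker

# Pull an image and run it as a container on whatever Docker daemon the
# provider is configured to talk to (placeholder image and port mapping).
image = docker.RemoteImage("proxy-image", name="nginx:1.25")
container = docker.Container(
    "reverse-proxy",
    image=image.image_id,
    ports=[docker.ContainerPortArgs(internal=80, external=8080)],
)
```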
p
Alright, appreciate all the help. I think in that case I'll just pick a cloud provider to start with; I don't have a need for most of the Kubernetes stuff right now. The insight about being locked in regardless is helpful.
👍 1
l
By the time you've set up a minimal VPC + firewalls to put either EKS or EC2 in, you'll have enough expertise to choose between them 🙂
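If it helps, that minimal networking is roughly this much pulumi_aws code either way (the CIDR ranges and the open port are placeholders):
```python
import pulumi_aws as aws

# Minimal network to put either an EC2 instance or an EKS cluster into.
vpc = aws.ec2.Vpc("apps-vpc", cidr_block="10.0.0.0/16")
subnet = aws.ec2.Subnet("apps-subnet", vpc_id=vpc.id, cidr_block="10.0.1.0/24")

# A basic firewall: allow HTTPS in, allow everything out.
sg = aws.ec2.SecurityGroup(
    "apps-sg",
    vpc_id=vpc.id,
    ingress=[aws.ec2.SecurityGroupIngressArgs(
        protocol="tcp", from_port=443, to_port=443, cidr_blocks=["0.0.0.0/0"],
    )],
    egress=[aws.ec2.SecurityGroupEgressArgs(
        protocol="-1", from_port=0, to_port=0, cidr_blocks=["0.0.0.0/0"],
    )],
)
```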
🙌 1