# general
s
Are there any limits or practical edges as to the size of a stack? I'm looking at combining two stacks into one that would put us at about 160 resources. Who's got the biggest stack? 🙂
f
This is a great question. From a design perspective I'd recommend logically breaking stacks up as much as makes sense for your application, e.g. a base-networking stack, front-end stack, back-end stack, microservice-a stack, etc., and using stack references to get the necessary values from each. This allows for more targeted and "agile" infrastructure changes, similar to why you may not want all your application code in a single monolithic repository. As far as limitations, I don't believe there is a hard limit on the number of resources in a stack, but there have been a few cases where larger stacks see some performance issues. I can't seem to find a GitHub issue tracking it, but if I do I'll link it. Out of curiosity, what about multiple stacks makes you want to combine them into a single stack?
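For anyone new to stack references, here is a minimal sketch of the consuming side. The fully qualified stack name "acme/networking/prod" and the output names ("vpcId", "privateSubnetIds") are made up for illustration; swap in your own org, project, stack, and exports.

```typescript
import * as pulumi from "@pulumi/pulumi";

// Reference the networking layer stack; the name is org/project/stack.
const network = new pulumi.StackReference("acme/networking/prod");

// Pull the exported values this stack needs. These output names assume the
// networking project does `export const vpcId = ...` and so on.
const vpcId = network.requireOutput("vpcId");
const privateSubnetIds = network.requireOutput("privateSubnetIds");

// Use them like any other Output, e.g. pass vpcId into a security group.
export const consumedVpcId = vpcId;
```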
c
Totally agree with what Dan says about using a “logical organization” of your stacks, and not worrying so much about size. I can confirm there is no hard limit on the size of a stack, but naturally there is more work to do for an update as the stack grows. As far as the largest stack, the stack powering www.pulumi.com might be the largest out there. It has thousands of resources (one for each individual file stored in S3). And even at that size, updates may take as little as one minute depending on the actual changes that need to take place.
s
I've deployed with 825 resources, but I agree with Dan and Chris about breaking out your infra.
s
We've been working so far with project-based stacks. I had deployed one project in isolation with no issues, but when it ran in our staging account I hit an EIP limit error (5 is so low!) because multiple stacks were consuming the same limits. It turned out I didn't need so many NAT gateways, so the issue took care of itself, but when a deploy failed halfway through on staging I was concerned the same could happen in prod. I gather that instead of doing project-based stacks, organizing by layer makes sense. Within our projects we already have things broken out by networking, compute, etc., so maybe organizing those into stacks is the right approach.
We're also looking at keeping infrastructure code and app code in separate repos, and having the Fargate cluster reference a tag in ECR for deployments. Honestly, this multi-project organization stuff is getting onerous for our small team to digest. I wouldn't mind throwing some bucks at a consulting arrangement to get our fundamentals squared away.
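Regarding the layered organization mentioned above, the producing side of a layer stack is just ordinary exports; here is a minimal sketch of a hypothetical "networking" project whose outputs the compute/app layers would consume via a StackReference as in the earlier example.

```typescript
import * as aws from "@pulumi/aws";

// networking project: owns the shared VPC for this environment.
const vpc = new aws.ec2.Vpc("main", { cidrBlock: "10.0.0.0/16" });

// Export the values the other layers need; they pick them up with a
// StackReference and requireOutput("vpcId").
export const vpcId = vpc.id;
```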
s
We ran into a bunch of AWS limits initially too, but everything has been bumped to accommodate our scale. We are a small team as well and still very new to Pulumi. If you want to talk some more feel free to DM me.
f
@swift-painter-31084 Agreed, breaking up those project stacks into layers would be a more manageable structure.
👍 1
b
> We're also looking at keeping infrastructure code and app code in separate repos, and having the Fargate cluster reference a tag in ECR for deployments.
FWIW, from going down this road, I've found it a lot easier to have deploy-specific code live in the same repo as the app. We have it such that our CI builds a Pulumi container and an application container; our deploy step applies the contents of the Pulumi container with baseline secrets, etc., and passes in the desired tag of the app container for Fargate's consumption.
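To make the "pass in the desired tag" part concrete, here is a minimal sketch of the Pulumi side, assuming the deploy step sets a config value (the key "appImageTag", the ECR repo name "my-app", and the container port are all made up; adapt to your setup).

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

// The deploy step supplies the tag, e.g.: pulumi config set appImageTag $GIT_SHA
const config = new pulumi.Config();
const imageTag = config.require("appImageTag");

// Look up the existing ECR repository and build the fully qualified image URI.
const repo = aws.ecr.getRepositoryOutput({ name: "my-app" });
const image = pulumi.interpolate`${repo.repositoryUrl}:${imageTag}`;

// Execution role so Fargate can pull the image from ECR.
const execRole = new aws.iam.Role("app-exec-role", {
    assumeRolePolicy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
            Action: "sts:AssumeRole",
            Effect: "Allow",
            Principal: { Service: "ecs-tasks.amazonaws.com" },
        }],
    }),
});
new aws.iam.RolePolicyAttachment("app-exec-policy", {
    role: execRole.name,
    policyArn: "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy",
});

// Task definition whose container image is pinned to the tag from config.
const taskDef = new aws.ecs.TaskDefinition("app-task", {
    family: "app",
    cpu: "256",
    memory: "512",
    networkMode: "awsvpc",
    requiresCompatibilities: ["FARGATE"],
    executionRoleArn: execRole.arn,
    containerDefinitions: image.apply(img => JSON.stringify([{
        name: "app",
        image: img,
        portMappings: [{ containerPort: 80 }],
    }])),
});
```

The nice property of this pattern is that the infra stack never rebuilds the app image; CI just hands it a tag and the update swaps the task definition.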