# general
s
Does anyone understand Pulumi memory/CPU usage? I am trying to figure out how to scale my API that creates Pulumi stacks on different cloud providers. I see that memory and CPU usage differs for each stack. Curious how Pulumi Cloud handles the load from many clients. I had sequential execution of stacks via the API, but it's too slow for my needs, so I added logic to handle multiple requests in parallel. Now CPU/memory usage explodes and I need to think about server capacity, or some limit on how many deployments to allow in parallel. Another approach I am considering is a queue for Pulumi stack requests, creating a job per request so that each job gets a standard CPU/memory profile.
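For the parallel limit, the simplest thing I can think of is a semaphore around each deployment. A rough sketch; `maxParallel` and `deployStack` are placeholder names for the actual limit and deployment function:

```go
package main

import (
	"context"
	"log"
)

// maxParallel and deployStack are placeholders for illustration;
// the real limit and deployment logic live in the API server.
const maxParallel = 4

func deployStack(ctx context.Context, req string) error {
	// ... run the Pulumi deployment for this request ...
	return nil
}

func handleRequests(ctx context.Context, requests <-chan string) {
	sem := make(chan struct{}, maxParallel) // bounds in-flight deployments
	for req := range requests {
		sem <- struct{}{} // blocks once maxParallel deployments are running
		go func(r string) {
			defer func() { <-sem }()
			if err := deployStack(ctx, r); err != nil {
				log.Printf("deploy %s failed: %v", r, err)
			}
		}(req)
	}
}
```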
l
Do you need multiple stacks? You can create resources in multiple clouds within a single stack. And if you do need multiple programs, then wouldn't you use multiple projects, not multiple stacks?
s
Here's what I am doing: https://brokee.io. I want to be able to create cloud environments for multiple engineers in parallel. I am using the Pulumi Automation API to create stacks. To run the Automation API I built a backend API in Go, using goroutines to handle multiple requests concurrently. But each stack has different infrastructure, and for some reason Pulumi uses a different amount of CPU and memory for each, so I'm struggling to figure out a good scaling approach.
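Roughly, each request runs something like this inside a goroutine. A minimal sketch; the project name and the inline program body are placeholders:

```go
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/pulumi/pulumi/sdk/v3/go/auto"
	"github.com/pulumi/pulumi/sdk/v3/go/auto/optup"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

// deployStack creates (or selects) a stack for one engineer's environment
// and runs `up` on it. The stack name and program body are placeholders.
func deployStack(ctx context.Context, stackName string) error {
	program := func(pctx *pulumi.Context) error {
		// ... declare the cloud resources for this environment ...
		return nil
	}

	// Each call builds its own in-process workspace; running many of these
	// concurrently in goroutines is what drives the CPU/memory variance.
	stack, err := auto.UpsertStackInlineSource(ctx, stackName, "brokee-envs", program)
	if err != nil {
		return fmt.Errorf("create/select stack: %w", err)
	}

	_, err = stack.Up(ctx, optup.ProgressStreams(os.Stdout))
	return err
}
```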
l
If each stack has different infrastructure, then those should be different projects, not just different stacks. Memory usage for Automation API programs is not something I have experience with, unfortunately. Perhaps you could spawn a new process for each deployment? That should at least isolate each deployment's memory and CPU usage from your API server.
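Something along these lines; a rough sketch where the `deploy-worker` binary and its `--stack` flag are placeholders for whatever wraps your Automation API program:

```go
package main

import (
	"context"
	"os"
	"os/exec"
)

// runDeploymentProcess launches one deployment in its own OS process so the
// Pulumi engine's CPU/memory use is separated from the API server.
// The worker path and --stack flag are hypothetical.
func runDeploymentProcess(ctx context.Context, stackName string) error {
	cmd := exec.CommandContext(ctx, "/usr/local/bin/deploy-worker", "--stack", stackName)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run() // blocks until the worker process exits
}
```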
s
Yes, I am considering that type of architecture. I'd probably need a queue for it, spawning one job per request depending on how many come in. The app is running on Kubernetes.
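Something like this with client-go is what I have in mind; a rough sketch, with the image, namespace, and resource numbers as placeholders:

```go
package main

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// createDeployJob submits one Kubernetes Job per queued stack request, so
// every deployment gets the same CPU/memory profile. Image, namespace, and
// resource values below are placeholders.
func createDeployJob(ctx context.Context, stackName string) error {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}

	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "deploy-" + stackName + "-"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyNever,
					Containers: []corev1.Container{{
						Name:  "pulumi-deploy",
						Image: "registry.example.com/deploy-worker:latest",
						Args:  []string{"--stack", stackName},
						Resources: corev1.ResourceRequirements{
							Requests: corev1.ResourceList{
								corev1.ResourceCPU:    resource.MustParse("500m"),
								corev1.ResourceMemory: resource.MustParse("512Mi"),
							},
							Limits: corev1.ResourceList{
								corev1.ResourceCPU:    resource.MustParse("1"),
								corev1.ResourceMemory: resource.MustParse("1Gi"),
							},
						},
					}},
				},
			},
		},
	}

	_, err = client.BatchV1().Jobs("deployments").Create(ctx, job, metav1.CreateOptions{})
	return err
}
```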