12/28/2022, 8:35 PM
Hello, I have built a platform that lets students access cybersecurity exercises and spin up a new environment on AWS, including a VPC, subnets, security groups, and instances. I am using Python multiprocessing together with Pulumi's Automation API to create a new stack and run Pulumi's
command to create the environment. However, when more than 10 students access the platform simultaneously I run into problems: each Pulumi process consumes roughly 200 MB of memory, which causes the server to slow down or crash. Could you suggest ways to optimize creating environments in parallel? It is also important that each environment is created as quickly as possible. Thank you.
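One common way to keep memory bounded in this situation (a sketch, not something from the thread) is to stop forking an unbounded process per student and instead put deployments behind a fixed-size worker pool, so at most N Pulumi operations run at once. The `deploy_environment` body below is a placeholder; in the real platform it would invoke the Pulumi Automation API (shown in comments, with hypothetical project/program names), which is assumed here rather than executed.

```python
from concurrent.futures import ThreadPoolExecutor

# With ~200 MB per Pulumi process, cap concurrency to what the server can hold.
MAX_CONCURRENT_UPS = 5

def deploy_environment(student_id: str) -> str:
    # Placeholder for the real deployment. With the Automation API it would
    # look roughly like:
    #   from pulumi import automation as auto
    #   stack = auto.create_or_select_stack(
    #       stack_name=f"student-{student_id}",
    #       project_name="cyber-labs",          # hypothetical project name
    #       program=build_environment_program,  # hypothetical Pulumi program
    #   )
    #   stack.up()
    return f"env-{student_id} ready"

# A single shared pool: extra requests queue up instead of spawning processes.
executor = ThreadPoolExecutor(max_workers=MAX_CONCURRENT_UPS)

def request_environment(student_id: str):
    """Called by the web layer; returns a future the caller can poll."""
    return executor.submit(deploy_environment, student_id)

if __name__ == "__main__":
    futures = [request_environment(str(i)) for i in range(12)]
    print([f.result() for f in futures])
```

Requests beyond the cap simply wait in the executor's queue, so peak memory stays near MAX_CONCURRENT_UPS × 200 MB no matter how many students click at once, at the cost of some queuing latency under load.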


12/28/2022, 9:31 PM
Hi Yaniv, there are different ways to solve this. One question first: did you try Pulumi Deployments? It's a new service from Pulumi that lets you use a RemoteWorkspace with the Automation API. This means the deployment runs serverlessly on our side rather than on your server.
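For reference, a minimal sketch of what the RemoteWorkspace path might look like from Python. The function and option names below are my reading of the Automation API's remote-workspace surface and should be verified against the current Pulumi docs; the org, repo URL, and paths are hypothetical. It needs the `pulumi` package and a Pulumi Cloud access token, so nothing Pulumi-related executes at import time here.

```python
def launch_remote_environment(student_id: str):
    """Kick off a deployment via Pulumi Deployments instead of locally.

    The heavy lifting (the Pulumi engine, plugins, the ~200 MB process)
    then runs in Pulumi's service, not on the platform server.
    """
    # Imported lazily: requires the `pulumi` package and a Pulumi Cloud
    # access token in the environment when actually called.
    from pulumi import automation as auto

    stack = auto.create_remote_stack_git_source(
        # Fully qualified name: <org>/<project>/<stack> (hypothetical org).
        stack_name=f"my-org/cyber-labs/student-{student_id}",
        url="https://github.com/my-org/cyber-labs.git",  # hypothetical repo
        branch="refs/heads/main",
        project_path="infra",
        opts=auto.RemoteWorkspaceOptions(
            env_vars={"AWS_REGION": "eu-west-1"},
        ),
    )
    return stack.up()  # runs remotely; returns when the update finishes
```

Because each `up` runs in Pulumi's service, the platform server only holds a lightweight handle per student instead of a full Pulumi process.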


12/29/2022, 5:52 AM
We are using the Automation API and have a self-service, no-code platform called Q-Cloud. Yes, the API uses a fair amount of memory for the Pulumi container we run, but only during execution, as you mention. We have optimized a few things and it works fine for 25 users, though it occasionally slows down while around 5 stack updates run concurrently. We think the memory usage may be due to our proprietary layer on top of the Pulumi API, which stores each stack's no-code canvas state locally on the filesystem. Our API stack is Node.js, but the Pulumi program is written in TypeScript. Is your deployment containerized?


12/29/2022, 9:03 AM
Hi @many-telephone-49025, I've never used this feature. I'll read about it and try it, thanks! Are there other solutions to this issue? @hallowed-horse-57635 The environment is containerized, running inside Docker (pulumi/pulumi-python).