# general
Hi Yaniv, there are different ways to solve this. One question: did you try Pulumi Deployments? It's a new service from Pulumi that lets you use the RemoteWorkspace in the Automation API. This means the deployment runs serverless on our side. https://www.pulumi.com/docs/intro/pulumi-service/deployments/
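A minimal sketch of what that RemoteWorkspace usage could look like in TypeScript with the Automation API. The stack name, repo URL, and project path are placeholders; it also assumes a `PULUMI_ACCESS_TOKEN` is set so the update can run on Pulumi's hosted infrastructure rather than in your own container:

```typescript
// Sketch: run a stack update remotely via Pulumi Deployments using the
// Automation API's RemoteWorkspace. The program is pulled from Git and
// executed on Pulumi's side, so the memory/CPU cost of the update does
// not land on the local API container.
import { RemoteWorkspace } from "@pulumi/pulumi/automation";

async function main() {
    // Placeholder stack/repo values -- replace with your own.
    const stack = await RemoteWorkspace.createOrSelectStack({
        stackName: "org/project/dev",
        url: "https://github.com/example/infra.git",
        branch: "refs/heads/main",
        projectPath: "infra",
    });

    // The update itself runs serverless; we only stream its output here.
    const result = await stack.up({ onOutput: console.log });
    console.log(`update status: ${result.summary.result}`);
}

main().catch((err) => {
    console.error(err);
    process.exit(1);
});
```

Running this needs a Pulumi Cloud account with Deployments enabled, so it is a configuration sketch rather than something executable offline.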
We are using the Automation API and have a self-service no-code platform called Q-Cloud. Yeah, the API uses a bit of memory for the Pulumi container we run, but only during execution, as you mention. We have optimized a few things and it works fine for 25 users; it occasionally slows down during 5 or so concurrent stack updates. We think the memory usage may be due to our proprietary layer on top of the Pulumi API, which stores each stack's no-code canvas state locally in the file system. Our stack is Node.js for the API, but we use TS for the program... is your deployment containerized?
Hi @many-telephone-49025, I've never used this feature. I'll read about it and try it. Thanks! Are there other solutions for this issue? @hallowed-horse-57635 The environment is containerized, running inside Docker (pulumi/pulumi-python).