# general
b
Good question. The actual calls to the cloud provider happen from wherever you run `pulumi up` (your laptop, a data centre, etc.); they don't go through any Pulumi servers. Some calls are made to the Pulumi SaaS if you're using that as a backend, but I don't think they take long. I'm based in the UK and I don't see many speed problems talking to Pulumi's servers.
c
Thank you for the answer. Now I can see I overlooked the fact that I'm creating a cluster in the United States, and that Pulumi creating only one or a few resources at a time probably also impacts the speed.
b
So we do create resources asynchronously as much as possible (except where we can't because of a dependency).
Also, not a dumb assumption. Making the calls from where the Pulumi CLI is being run also means we don't have to pass your cloud credentials anywhere, so they stay safe.
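A minimal sketch of what this means in practice, assuming AWS as the provider: the CLI reads credentials from the standard environment variables (or `~/.aws/credentials`) on the machine it runs on, and calls the cloud API directly from there. The values below are placeholders.

```shell
# Cloud credentials are read locally by the Pulumi CLI; they are
# used to call AWS directly and are never sent to Pulumi's servers.
export AWS_ACCESS_KEY_ID="AKIA..."        # placeholder value
export AWS_SECRET_ACCESS_KEY="secret..."  # placeholder value

pulumi up   # provider API calls originate from this machine
```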
m
Hi @clever-crowd-18899, I'd suggest you run those commands from your CI/CD system, as close as possible to your final destination. Another thing worth testing is how many operations you want to run in parallel: https://www.pulumi.com/docs/reference/cli/pulumi_up/
`-p, --parallel int` — Allow P resource operations to run in parallel at once (1 for no parallelism). Defaults to unbounded (default 2147483647).
As @brave-planet-10645 said, the default is as much as possible. A last tip is about the Pulumi Service that stores your state: if your number of resources is fairly large, you can opt for the self-hosted option: https://www.pulumi.com/docs/guides/self-hosted/
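For illustration, the `--parallel` flag from the docs above can be used to cap concurrency; the value 10 here is just an example, not a recommendation.

```shell
# Run at most 10 resource operations concurrently;
# without --parallel the default is effectively unbounded.
pulumi up --parallel 10

# Fully serialised apply, sometimes useful when debugging
# ordering or rate-limit issues:
pulumi up --parallel 1
```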
b
I should point out that the self-hosted version is part of the enterprise plan we offer. You can also use S3 / Azure Blob Storage / Google Cloud Storage as a backend, but you don't get some of the added extras like RBAC or state locking: https://www.pulumi.com/docs/intro/concepts/state/
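A quick sketch of switching the state backend to object storage with `pulumi login`; the bucket name is hypothetical.

```shell
# Log out of the current backend (e.g. the Pulumi Service)
# and point state storage at an S3 bucket instead.
pulumi logout
pulumi login s3://my-pulumi-state-bucket   # hypothetical bucket name

pulumi up   # stack state is now read from and written to S3
```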
c
Thank you for the amazing tips
I will try them