10/23/2019, 2:12 PM
Hi! Is there a way to profile the time needed for different tasks during a deployment (`pulumi up`)? The stack's number of resources grew over time, and so did the time needed for a deployment (even for updates to a single Lambda). I'm considering several approaches to speed things up, but it would be great to prioritize them based on the expected speedup. Metrics I'd be interested in include:
- time needed to run the diff before an update
- time needed to upload a Lambda
- time needed for requests to the Pulumi backend
- maybe also overall time, split by resource type

We are already thinking about splitting the application into several stacks. Still, I'd like to make an informed choice about where and how to start optimizing our deployment times.

PS: the last deployment took about 36 minutes:
- 36 updated resources
- 476 unchanged resources

Kind regards, Chris


10/23/2019, 3:36 PM
We’ve made some very significant performance improvements in recent versions of @pulumi/pulumi and the Pulumi CLI. If you aren’t already on the latest, I’d suggest trying those to see if things have improved. Based on what you describe, this sounds like one of the issues that was recently addressed. If not, you can grab a trace and share it with us by following:
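For reference, the Pulumi CLI's documented `--tracing` flag records timing information for an update. A minimal sketch of capturing and viewing a trace (the file path is just an example, and the snippet is guarded so it is a no-op on machines without the CLI installed):

```shell
# Capture a trace of a full update to a local file (example path):
if command -v pulumi >/dev/null 2>&1; then
  pulumi up --tracing=file:./up.trace
  # Recent CLI versions can render the captured trace locally in a browser:
  pulumi view-trace ./up.trace
fi
```

The resulting trace file can also be shared with the Pulumi team for analysis, as suggested above.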


10/24/2019, 2:28 PM
The CLI version should be the latest: in CI/CD we install the newest version on every invocation. Furthermore, we're only using @pulumi/pulumi and @pulumi/aws. As I saw in the issues, you optimized some things a while ago? I'll check against the newest versions and probably grab a trace. Thanks!
@white-balloon-205 I sent you a trace in DM. Pulumi CLI: 1.4.0, resource plugin: aws-0.18.27; in package-lock.json: @pulumi/pulumi: 0.17.28, @pulumi/aws: 0.18.27. Wait, just realized the required npm packages are pretty outdated. Will update the Pulumi npm packages and report back.
After updating all packages to the current versions, `pulumi up` took even longer: 50 minutes for 36 updated and 476 unchanged resources. Sadly I couldn't get a trace for this update, because CI/CD ran out of memory.