# general

red-king-31563

01/16/2024, 7:09 AM
👋 Hi everyone. Yesterday, running `pulumi up` on a stack with ~140 AWS resources, I hit an issue similar to https://github.com/pulumi/pulumi-aws-native/issues/854, which led to Pulumi deleting everything that came after the point where the rate/retry limit was hit. Thankfully it was only policies and policy attachments that were lost, but this got me a bit scared. How do I mitigate this? Just increase `maxRetries` as suggested in https://github.com/pulumi/pulumi-aws-native/pull/862? Any other strategies/examples to work around this? Thanks a lot!
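For reference, one way to apply a higher retry limit is on an explicit provider instance rather than the default one. A minimal TypeScript sketch, assuming your version of `@pulumi/pulumi-aws-native` includes the `maxRetries` provider option from that PR (the region, retry count, and resource names here are illustrative, not from the thread):

```typescript
import * as awsnative from "@pulumi/aws-native";

// Assumption: the installed pulumi-aws-native version exposes the
// `maxRetries` provider option added in pull/862.
const patientProvider = new awsnative.Provider("patient-provider", {
    region: "us-east-1",
    maxRetries: 10, // raise above the default to ride out API throttling
});

// Then pass the provider explicitly to throttling-prone resources, e.g.:
// new awsnative.iam.RolePolicyAttachment("attach", { /* ... */ },
//     { provider: patientProvider });
```

Alternatively, the same option can usually be set for the default provider via stack config (`pulumi config set aws-native:maxRetries 10`), again assuming that config key exists in your provider version.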

straight-beach-79533

01/16/2024, 12:07 PM
I don't know how much this suggestion applies to you, but in general this is called "reducing the blast radius". Try to split those 140 resources into two (or three) groups, along dependency and frequency-of-change lines, then put each group in its own Pulumi project. To pass information from one project to another, use stack references: https://www.pulumi.com/docs/using-pulumi/stack-outputs-and-references/

For an introduction to the concept of multiple, dependent projects, see https://www.pulumi.com/docs/using-pulumi/organizing-projects-stacks/

This approach (reducing the blast radius) is not specific to Pulumi; I do the same with Terraform. Last point: the moment there is more than one project with dependencies between them, you can no longer run a single `pulumi up`. Instead, you have to script the invocation of the projects in the correct order. Is it worth it? I think it is a mandatory best practice for any production setup that reaches a certain size...
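The stack-reference pattern above boils down to a few lines in the downstream project. A minimal TypeScript sketch, where the `myorg/networking/prod` stack name and the `vpcId` output are hypothetical examples, not names from the thread:

```typescript
import * as pulumi from "@pulumi/pulumi";

// In the downstream project: reference the base project's stack.
// "myorg/networking/prod" is a hypothetical <org>/<project>/<stack> triple.
const base = new pulumi.StackReference("myorg/networking/prod");

// Read an output the base project exported, e.g. `export const vpcId = vpc.id;`
const vpcId = base.getOutput("vpcId");

// Re-export or feed it into this project's resources.
export const downstreamVpcId = vpcId;
```

The base project must `export` the value as a stack output, and the projects then have to be brought up in dependency order (base first), e.g. from a small wrapper script that runs `pulumi up` in each directory in turn.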

red-king-31563

01/16/2024, 2:53 PM
Thanks a lot for the advice. I asked for different strategies, and this certainly is an option. I'll check it out 🙂