# general


01/16/2024, 7:09 AM
👋 Hi everyone, yesterday, running `pulumi up` on a stack with ~140 AWS resources, I ran into a similar issue, which led to Pulumi in turn deleting everything that followed once the rate/retry limit was hit. Thank god it was only policies and policy attachments that were lost, but this got me a bit scared. How do I mitigate this? Just increasing the retry limit, as suggested? Any other strategies/examples to work around this? Thanks a lot!


01/16/2024, 12:07 PM
I don't know how much this suggestion applies to you, but in general this is called "reducing the blast radius". Try to split these 140 resources into two (or three) groups, along dependency and frequency-of-change lines, then put each group in its own Pulumi project. To link information from one project to another, use stack references (the Pulumi docs have an introduction to the concept of multiple, dependent projects). This approach of reducing the blast radius is not specific to Pulumi; I do the same with Terraform. Last point: the moment there is more than one project with dependencies among them, you can no longer run a single `pulumi up`. Instead, you have to script the invocation of the projects in the correct order. Is it worth it? I think it is a mandatory best practice for any prod setup that reaches a certain size...
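Scripting that invocation order could look like the minimal Python sketch below: a topological sort over a hand-written dependency graph, then a `pulumi up` per project directory. The project names, the dependency graph, and the `prod` stack name are all hypothetical examples, not anything from this thread.

```python
# Orchestrate several dependent Pulumi projects in dependency order.
# DEPS, the project names, and the stack name are illustrative assumptions.
import subprocess

# Each project maps to the projects it depends on (which must deploy first).
DEPS = {
    "networking": [],
    "iam": [],
    "app": ["networking", "iam"],
}

def deploy_order(deps):
    """Topologically sort projects so dependencies come before dependents."""
    order, seen = [], set()

    def visit(name):
        if name in seen:
            return
        seen.add(name)
        for dep in deps[name]:
            visit(dep)
        order.append(name)

    for name in deps:
        visit(name)
    return order

def deploy_all(deps, stack="prod"):
    """Run `pulumi up` in each project directory, stopping on the first failure."""
    for project in deploy_order(deps):
        subprocess.run(
            ["pulumi", "up", "--yes", "--stack", stack],
            cwd=project,
            check=True,
        )
```

With `check=True`, a failed deployment raises immediately, so a rate-limit blow-up in one project no longer cascades into the others.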


01/16/2024, 2:53 PM
Thanks a lot for the advice! I asked for different strategies, and this certainly is one. I'll check it out 🙂