# aws
s
How can I make Pulumi accept that certain infrastructure components should remain running on the down operation? For example, I have state which is stored in an S3 bucket. I do not intend to force-delete the bucket and destroy the entire infrastructure; I only want to delete all the surrounding compute nodes. How can I tell Pulumi to be fine with keeping specific objects (in particular, certain S3 buckets)? Basically I want to save on (compute) cost and tear down unused instances, NAT gateways, and Databricks instances. Would I need to structure Pulumi into stack references where individual projects can be down-ed separately? Or do you see any means of accomplishing this within a single Pulumi project/file?
l
The best design is to use different projects. Ideally, your projects should group together all the resources with the same deployment schedule. If some resources are intended to be longer-lived than others, then put them in a different project.
To work around this right now, you can kick the long-lived resource out of state (`pulumi state delete`), then destroy the remainder of the stack.
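As a rough sketch of that workaround (the URN below is a placeholder — look up the real one for your bucket first):

```shell
# List the resources in the current stack along with their URNs
pulumi stack --show-urns

# Remove the long-lived bucket from Pulumi's state WITHOUT deleting
# the actual S3 bucket (replace the placeholder URN with yours)
pulumi state delete 'urn:pulumi:dev::my-project::aws:s3/bucket:Bucket::state-bucket'

# Now destroy everything still tracked in the stack
pulumi destroy
```

Note that on the next `pulumi up` the bucket would be recreated (or need importing back), which is why separate projects are the cleaner long-term design.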
s
thanks
v
interesting discussion. So is it a good setup to segregate stateful resources from stateless resources? For example, creating one Pulumi project for provisioning RDS databases and one project for my EKS clusters? What are the advantages and disadvantages?
s
I am just getting started. Are there some blueprints or more recommendations on how to structure these? What I understand so far: slowly changing network-related stuff (VPCs, ...) should be separate; stateful things like buckets and RDS should be separate; and stateless components should be in their own project. Regarding advantages: individual management, reuse, and faster pulumi up/sync commands, since only a subset of the resources needs to be touched. Regarding downsides: it is more separated/segregated and, depending on how you set it up, more complex. I guess as long as it is at least a monorepo and your IDE can easily search all modules, it should not be too problematic.
v
and what about the environments (i.e. DEV, TEST, PROD, etc.)? How do you manage them? In a Pulumi project for each?
I'm starting too. I'm provisioning everything from one project, but I have concerns about maintenance in the long term; I'm looking for best practices.
s
Environments are easily handled via a different concept: stacks.
l
@victorious-architect-78054 I guess when you say stateful/stateless, you mean "storing persistent data" and "providing a service"? You should separate these into projects based not on the data they may or may not contain, but on their deployment cycles. If you need to destroy the data any time you destroy the cluster, then they should likely be in the same project. If you need your data backups to last longer than your databases (which you almost certainly do), then they should be in different projects. Environments almost always map to stacks. In most cases, most projects (e.g. network, backup, logs, application hosting...) will have the same stacks (e.g. dev, staging, APAC, EMEA...).
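To make that concrete, projects split this way can still share outputs across matching stacks via a StackReference. This is a minimal sketch assuming a hypothetical `networking` project in org `my-org` that exports a `vpcId` output (all names are placeholders):

```typescript
import * as pulumi from "@pulumi/pulumi";

// pulumi.getStack() returns the name of the current stack (dev, staging, ...),
// so each environment automatically references the matching networking stack.
const env = pulumi.getStack();
const network = new pulumi.StackReference(`my-org/networking/${env}`);

// Consume an output exported by the long-lived networking project.
// (The "vpcId" output name is an assumption for this sketch.)
export const vpcId = network.getOutput("vpcId");
```

This keeps the long-lived project (network, buckets) and the short-lived project (compute) loosely coupled: you can `pulumi destroy` the compute project's stack while the referenced stack stays up.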
s
This means that I can easily end up with a handful (5-ish) of related projects. Are there any neat out-of-the-box ways to orchestrate them overall?
v
@little-cartoon-10569 ok I got it
l
@sparse-optician-70334 You probably don't want too much in the way of orchestration between the projects. The reason they are different projects is that they have different deployment schedules, so you usually can't say "after stack dev in project LandingZone is deployed, then deploy stack dev in project AuthApi". If there were a rigid dependency like that, then there'd be just one project.