# general
m
When you’re trying to hack your way around Bulk Imports and shifting resources between Stacks:
s
I recently had to spend a fair amount of time in this area…perhaps I can help?
m
It all started when I wanted to move our DB from a monolithic stack into its own micro stack. This needs to happen in multiple environments: the main dev env, each developer's env, staging, and prod. So obviously it's the same source project (`acme-monolith`) and the same target project (`acme-db-only`), but the stacks are different (3 + no_of_devs), and obviously "imports" should happen on a per-stack basis.

I stumbled upon `pulumi stack export/import`, but what I didn't like is:
1. Huge JSON that is hard to look at manually to select resources - an interactive, git-like command line where you select which resources you want could have made my life easier (might write one)
2. Need to also search and replace `urn:pulumi:acme-monolith-1::acme-monolith` with `urn:pulumi:acme-db-only-1::acme-db-only` - but it's fairly easy
3. Need to `pulumi stack change-secrets-provider default` to re-encrypt everything - but it's fairly easy
4. Need to `rm -f` the complementary resources from the source stack's state
5. Need to make sure the code everyone uses for `pulumi up -s acme-monolith` stops creating the resources that moved to the new stack - but if I just remove those resources from the code, and the code runs before the state migrates, then the resources will be deleted (😱 our DBs). So we probably need to add `retainOnDelete` in the old stacks and tell everyone to `pulumi up` on all stacks. But we don't want `retainOnDelete` in the new stack - so there are multiple deployment waves that we have to synchronize for such a small thing
6. `pulumi stack import` is not previewable, and I'm not sure how to set up a correct security/organizational 4-eye+ policy for reviewing such horrific changes

Well, I tried doing something awful like:
1. `pulumi stack export --file export.json`
2. `jq` operations to make export.json valid for a `pulumi import` operation, which is reviewable
3. `pulumi import` that JSON - but it crashes, since obviously this is not meant to be

Feels like tooling is missing on the client side - that can be fixed by each org doing its own shtick. But there's probably also work to be done on the backends to support migration more natively - to prevent a resource existing in two stacks at once (like we have today when we migrate entire Pulumi stacks between orgs in app.pulumi.com)
s
A few thoughts:
• You can use `jq` (as you discovered) to work with resources in the state export file. To assist with building `jq` queries, you might try `jid` (JSON incremental digger, I believe that's what it stands for). Very helpful when you aren't 100% familiar with the schema of whatever JSON you're working with.
• WRT your points 4 and 5, you can use `pulumi state delete` to remove objects from state, then refactor the code to remove those resources. Then running `pulumi up` should result in a no-op (no changes), because the program and the state match. But yes, you do want to ensure no one runs `pulumi up` between updating the state and refactoring the code. The `--target-dependents` flag for `pulumi state delete` might make it easier, if a lot of the resources fall under a common parent resource.
• And as far as no preview for the state import function - yep, you're 100% correct. It's one of the reasons this process/procedure is unsupported. You could consider a process like this:
1. Remove the resources from the original project/stack using `pulumi state delete` and refactor the code to no longer create those objects. They now exist on the cloud provider, but are not under Pulumi management.
2. Use `pulumi import` to re-import those objects back into Pulumi, generating code along the way. Compare this to the code you removed in step 1 to make sure things line up.
This is also not without its caveats, but it is a more supported procedure.
We (Pulumi) do recognize there's a gap here, and we've started thinking about how to address this. I don't know if there's an issue already open for tracking; if I find one, I'll add it here - and we'd love to hear your thoughts on how you think it should work/function.
m
s
That’s the one 🙂
Luke’s recent comment is, in part at least, due to some recent spelunking of mine into this process
m
Thanks for the detailed answer. I do acknowledge that the following functionality mitigates it:
1. `pulumi stack import/export` only edits text files and doesn't really change resources. Subsequent `pulumi up`s might be affected, but at least they will go through a 4-eye review policy
2. Pulumi's backend keeps history - we can download an older state if something goes horribly wrong

I also think you're correct that if we do a "stop the world" event at our org and synchronize everything, it will be okay. But some dev stacks might be in an inconsistent state and might require some time to fix back into a `pulumi up`able state, which sometimes makes this event longer than it should be - and in addition, it's not always clear how you "go back".

Also, assuming the worst, where everything happens async, I think this is the correct flow:
1. Block pulumi up
2. Remove the code in the code base (4-eye review)
3. Make sure nobody `pulumi up`s the old code (4-eye review anyway)
4. Delete the state from all stacks manually - write a complicated script - no 4-eye policy, no easy way to view changes, requires Pulumi proficiency
5. Unblock pulumi up
6. Floating resources should be wrangled back into new stacks - write a complicated script - no 4-eye policy, no easy way to view changes, requires Pulumi proficiency
I also understand that this is a hard problem. Introducing unstable client tooling that solves 90% of the problem and then has to be retracted - all while educating the community on how to use it - is a hard process. So letting orgs try to work around these problems themselves and picking the winners in a free-market, OSS-spirit way is probably the correct approach 🙂 Syncing desired state is hard - maybe with GPT-4-infused Pulumi it will be solved, but until then I have a feeling we will have to process JSON files with `jq`s
s
LOL at the Skyrim reference 😄 It’s definitely an area where we want to improve. For now, though, taking a careful and measured approach can mitigate some of the risks, as you point out.
m
Is there a way to get the to-be-created URNs of a non-existing stack when I'm in its directory? `pulumi up` doesn't have the `-u` flag. I need it for filtering the state imports
g
It is possible to "guess" the URNs of non-existent resources, but that's largely a manual operation you'd have to undertake. Here are the docs on the composition of URNs: https://www.pulumi.com/docs/intro/concepts/resources/names/#urns
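Following those docs, the "guess" boils down to string composition. A minimal sketch (`make_urn` is a hypothetical helper, not part of the Pulumi SDK):

```python
def make_urn(stack: str, project: str, qualified_type: str, name: str) -> str:
    """Compose a Pulumi URN per the docs linked above:
    urn:pulumi:<stack>::<project>::<qualified type>::<name>.

    For child resources the qualified type chains parent types with '$',
    e.g. "my:component:Comp$aws:s3/bucket:Bucket"; for top-level
    resources it is just the type token, e.g. "aws:rds/instance:Instance".
    """
    return f"urn:pulumi:{stack}::{project}::{qualified_type}::{name}"
```

For filtering state imports, generating these for the new stack's expected resources and comparing against the URNs in the exported state would do the trick, as long as the logical names and types match the program.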
l
It's one of the funnest parts of debugging Pulumi, that's for sure. Well done on getting through it. It'll be great if/when more powerful tools are provided. I think (well, hope) the `pulumi stack import` base tool will always stay, like a git plumbing command; being able to completely overwrite the state has saved me so much work in the past, it can never go away! (It's amazing for a complete rebrand of projects, stacks, and resource names: a few search-and-replaces on a copy of your state, import it, and you've got a shiny new product!)