# general
I'm running into an issue deleting some resources and it's unclear why. I have two resources (both on Google Cloud): one (let's call it the "backend") depends on the other (let's call it the "instance group manager"), and that dependency is recorded in both Google Cloud and Pulumi. Both are slated for deletion, but Pulumi is trying to delete the instance group manager first, which fails because inside Google Cloud the backend still references it. Ideally Pulumi (which knows about this dependency) would delete the thing holding the dependency first (the backend) and then delete the thing depended upon (the instance group manager). Any ideas on how to fix this?
Error is:
```
error: deleting <instance group manager>: 1 error occurred:
	* Error waiting for delete to complete: Error waiting for Deleting RegionInstanceGroupManager: The instance_group_manager resource '<xxx>' is already being used by '<backend>'
```
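For context, the two resources are wired up roughly like this (a simplified Python sketch; resource names, region, image, and network are placeholders, not the real program). The backend's `group` comes from an output of the instance group manager, so the backend is the dependent, and on destroy I'd expect Pulumi to delete it first:

```python
import pulumi_gcp as gcp

# Simplified sketch; names, region, image, and network are placeholders.
template = gcp.compute.InstanceTemplate(
    "template",
    machine_type="e2-small",
    disks=[gcp.compute.InstanceTemplateDiskArgs(
        source_image="debian-cloud/debian-12",
        boot=True,
    )],
    network_interfaces=[gcp.compute.InstanceTemplateNetworkInterfaceArgs(
        network="default",
    )],
)

igm = gcp.compute.RegionInstanceGroupManager(
    "instance-group-manager",
    base_instance_name="app",
    region="us-central1",
    versions=[gcp.compute.RegionInstanceGroupManagerVersionArgs(
        instance_template=template.self_link,
    )],
)

health_check = gcp.compute.RegionHealthCheck(
    "health-check",
    region="us-central1",
    tcp_health_check=gcp.compute.RegionHealthCheckTcpHealthCheckArgs(port=80),
)

# The backend's `group` is an output of the instance group manager, so the
# backend is the dependent; on destroy Pulumi should delete it before the IGM.
backend = gcp.compute.RegionBackendService(
    "backend",
    region="us-central1",
    health_checks=health_check.id,
    backends=[gcp.compute.RegionBackendServiceBackendArgs(
        group=igm.instance_group,
    )],
)
```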
Within the Pulumi stack:
```
{
    "urn": "<instance_group_manager_urn>",
    ...
    "parent": "<customer_component_urn>",
    "dependencies": [
        "<iam_membership_urn>",
        "<subnet_urn>",
        "<template_urn>"
    ],
    ...
}
```
```
{
    "urn": "<backend_urn>",
    ...
    "parent": "<custom_component_urn>",
    "dependencies": [
        "<instance_group_manager_urn>",
        "<health_check_urn>"
    ],
    ...
}
```
So why would Pulumi ever delete the instance group manager before the backend? It seems like that would break the dependency ordering recorded in the stack.
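To double-check what the engine has actually recorded, this rough script dumps the checkpoint and prints everything that claims to depend on the instance group manager (it assumes the exported JSON has the usual `deployment.resources` layout; the URN is a placeholder):

```python
import json
import subprocess

# Placeholder URN; substitute the real one (e.g. from `pulumi stack --show-urns`).
IGM_URN = "<instance_group_manager_urn>"

exported = subprocess.run(
    ["pulumi", "stack", "export", "--stack", "main"],
    check=True, capture_output=True, text=True,
).stdout

for res in json.loads(exported).get("deployment", {}).get("resources", []):
    deps = set(res.get("dependencies", []))
    # propertyDependencies maps property name -> list of URNs
    for urns in res.get("propertyDependencies", {}).values():
        deps.update(urns)
    if IGM_URN in deps:
        print(f"{res['urn']} depends on the instance group manager")
```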
Interestingly, the plan does not include deleting the `<instance_group_manager_urn>` at all.
It looks like the reason is that, before I set this entire custom component to be deleted, the instance group manager was being replaced, and the replacement failed because the old one couldn't be deleted due to the very dependency that Pulumi knew about but ignored.
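For what it's worth, a replacement only deletes the old instance group manager first when Pulumi can't create the replacement alongside it, e.g. a fixed `name` or an explicit `delete_before_replace`. A hypothetical sketch of the two spellings that force that ordering (I'm not claiming the real program uses either; the template path is a placeholder):

```python
import pulumi
import pulumi_gcp as gcp

# Either a fixed `name` or an explicit delete_before_replace makes Pulumi
# delete the old IGM before creating its replacement, which is exactly when
# the in-cloud backend reference blocks the delete.
igm = gcp.compute.RegionInstanceGroupManager(
    "instance-group-manager",
    name="my-igm",  # fixed name: the replacement would collide, so delete first
    base_instance_name="app",
    region="us-central1",
    versions=[gcp.compute.RegionInstanceGroupManagerVersionArgs(
        instance_template="projects/my-project/global/instanceTemplates/my-template",
    )],
    opts=pulumi.ResourceOptions(delete_before_replace=True),  # the explicit form
)
```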
I was able to destroy the instances manually... but now Pulumi has corrupted its stack.
Running `pulumi up` produced:
```
error: post-step event returned an error: failed to save snapshot: .pulumi/stacks/main.json: snapshot integrity failure; it was already written, but is invalid (backup available at .pulumi/stacks/main.json.bak): resource <firewall>'s dependency <subnet> refers to missing resource
```
And sure enough, the `<subnet>` resource is missing from the stack (but was not deleted from Google Cloud), while `<firewall>` exists and lists it as a dependency.
Attempting to `pulumi stack import -s main < "main.json.bak"` fails with:
```
error: could not deserialize deployment: unexpected end of JSON input
```
I guess that's just as well, since `main.json.bak` lacks `<subnet>` anyway. Even though Pulumi said it made a backup, the backup was taken AFTER that resource was removed from the stack (though it was never deleted from Google Cloud).
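My current plan for digging out, in case anyone spots a problem with it: export whatever state is left, strip the dangling reference to the missing `<subnet>` from the `<firewall>` resource, import the repaired state, and then re-adopt the subnet that still exists in Google Cloud with `pulumi import`. Rough sketch (URNs are placeholders, and I'm assuming `pulumi stack export` still works on the invalid snapshot):

```python
import json
import subprocess

# Placeholder URN taken from the integrity-failure error message.
MISSING_SUBNET_URN = "<subnet_urn>"

exported = subprocess.run(
    ["pulumi", "stack", "export", "--stack", "main"],
    check=True, capture_output=True, text=True,
).stdout
state = json.loads(exported)

for res in state.get("deployment", {}).get("resources", []):
    # Drop the dangling reference so the snapshot passes its integrity check.
    if MISSING_SUBNET_URN in res.get("dependencies", []):
        res["dependencies"].remove(MISSING_SUBNET_URN)
    for prop, urns in res.get("propertyDependencies", {}).items():
        res["propertyDependencies"][prop] = [u for u in urns if u != MISSING_SUBNET_URN]

with open("repaired.json", "w") as f:
    json.dump(state, f, indent=4)

# After eyeballing repaired.json:
#   pulumi stack import --stack main --file repaired.json
# and then re-adopt the subnet that still exists in Google Cloud, e.g.
#   pulumi import gcp:compute/subnetwork:Subnetwork <name> <subnet-id>
```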