# general
p
I keep having issues with projects and stack references. One stack deploys k8s clusters, then several stacks and environments use that stack to deploy onto the clusters. As we are still fine-tuning the cluster setup, every now and then the clusters (or one of them) need to be recreated. At that point, everything deployed on these clusters via stack references starts freaking out. I know I can just remove the stacks themselves, but that would cause me to have to redo the configs for dozens of stacks. Either our setup is bad, or I need either
pulumi destroy --if-failed-just-delete-from-stack
or a script that can “deep” delete everything from the stack export that depends on the stack reference. Anyone have such a script by any chance?
b
if you have to recreate the cluster, can you do a targeted destroy in the dependent stacks first - targeting everything beneath that cluster dependency?
p
Sometimes, but we have hundreds of stacks atm, and they all deploy Helm charts with god knows how many underlying resources. Our stack export files are immense.
b
I'm not saying modify stack export files. The problem here is that you are letting an update on the parent stack go through without first doing a destroy on the child stacks to prepare for the parent update. When the parent update goes through, none of the child stacks should still depend on it; otherwise you have issues.
you can do
pulumi destroy --target [string array of URNs that depend on cluster] --target-dependents
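Building that `--target` list by hand is tedious. A sketch of one way to generate it (assuming the JSON shape of `pulumi stack export`, with a hypothetical cluster URN) by walking the export's dependency and parent edges:

```python
import json
import shutil
import subprocess


def dependents_of(resources, root_urn):
    """Return URNs of resources that transitively depend on root_urn.

    Follows the `dependencies` and `parent` edges found in the
    `deployment.resources` array of a `pulumi stack export`. Iterates
    to a fixed point so resource order in the export does not matter.
    """
    dependents = set()
    changed = True
    while changed:
        changed = False
        for res in resources:
            urn = res["urn"]
            if urn in dependents:
                continue
            edges = set(res.get("dependencies", []))
            if res.get("parent"):
                edges.add(res["parent"])
            if root_urn in edges or edges & dependents:
                dependents.add(urn)
                changed = True
    return sorted(dependents)


if __name__ == "__main__" and shutil.which("pulumi"):
    # Hypothetical usage: the cluster URN below is an assumption, not
    # taken from the thread. Prints a ready-to-run destroy command.
    export = json.loads(subprocess.check_output(["pulumi", "stack", "export"]))
    cluster_urn = "urn:pulumi:dev::clusters::gcp:container/cluster:Cluster::main"
    urns = dependents_of(export["deployment"]["resources"], cluster_urn)
    print("pulumi destroy --target-dependents "
          + " ".join(f"--target {u}" for u in urns))
```

`--target-dependents` already pulls in children of each target, so strictly you only need the direct dependents; walking the full closure is just a belt-and-braces way to make the list explicit.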
p
Yeah, I realise that; it just happens every now and then, especially since quite a few of the stacks are dynamically generated using the Automation API. It is not like there is a manageable list of things to do in order.
This is the best I have for now:
```sh
# List all stacks in the project, stripping the org prefix.
STACKS=$(pulumi stack ls -j | jq -r ".[].name" | sed "s/settlemint\///")
while read -r stack; do
  # Pulumi.development-gke-europe-test24.yaml
  pulumi stack select settlemint/launchpad-services/${stack}
  pulumi config refresh

  # Cancel any in-flight update
  pulumi cancel --yes -s settlemint/launchpad-services/${stack}

  # Clear pending operations left over from failed updates
  pulumi stack export | jq "del(.deployment.pending_operations)" | pulumi stack import

  # Try a real destroy first
  pulumi destroy --yes -s settlemint/launchpad-services/${stack}

  # Drop whatever could not be destroyed from the state...
  pulumi stack export | jq "del(.deployment.resources)" | pulumi stack import

  # ...and destroy again so the stack ends up empty
  pulumi destroy --yes -s settlemint/launchpad-services/${stack}
done <<< "$STACKS"
```
b
I think you want the Automation API then. With the Automation API you could:
1. Run `pulumi preview` on the parent project.
2. Check if the cluster is getting recreated.
3. If it is, do `workspace.ListStacks` in your child project and do a targeted destroy on each.
4. Else, run `pulumi update` on the parent.
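The steps above, sketched with the Python Automation API (the project paths and stack names here are assumptions; whether replacement alone should trigger a teardown, or deletions too, is your call):

```python
import shutil


def needs_child_teardown(change_summary):
    """True if a preview's change summary replaces or deletes anything.

    `change_summary` is the op-name -> count dict exposed as
    `PreviewResult.change_summary` by the Pulumi Automation API.
    """
    return any(change_summary.get(op, 0) > 0 for op in ("replace", "delete"))


def main():
    # Requires `pip install pulumi` and the pulumi CLI on PATH.
    import pulumi.automation as auto

    # 1. Preview the parent (cluster) stack. Names are hypothetical.
    parent = auto.select_stack(
        stack_name="settlemint/clusters/production",
        work_dir="./clusters",
    )
    preview = parent.preview()

    if needs_child_teardown(preview.change_summary):
        # 2.+3. Cluster would be recreated: destroy every child stack first.
        ws = auto.LocalWorkspace(work_dir="./launchpad-services")
        for summary in ws.list_stacks():
            child = auto.select_stack(summary.name, work_dir="./launchpad-services")
            child.destroy(on_output=print)

    # 4. Now the parent can update without stranding child resources.
    parent.up(on_output=print)


if __name__ == "__main__" and shutil.which("pulumi"):
    main()
```

Since your child stacks are already generated via the Automation API, the `list_stacks` loop can reuse whatever workspace those scripts set up.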
p
Hmm, that might work! (For next time, that is; knee-deep in dirty destroys atm ;))