# kubernetes
a
@gorgeous-egg-16927 Hey Levi... Not sure whom else to ask, but we run into this scenario a lot with our development clusters and was wondering how you felt or what you thought. We use Pulumi in separate apps to:
1. Stand up the cluster
2. Install core k8s resources (fluentd, certmanager, nginx/traefik, etc.)
3. Install LOB apps
Since the clusters are for lower environments, we tear them down occasionally. The challenge with this is that unless we tear out all of the installed k8s resources first, the Pulumi state for those k8s resources is never cleared out and we have to go do that manually. We understand why, but it still happens on occasion. Is there a feature or technique in Pulumi where, if a "parent stack" is deleted, all child stacks' state is automatically cleared out as well?
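For context, a rough sketch of what that first layer (the cluster app) might look like. The thread doesn't say which cloud or language is in use, so @pulumi/eks, TypeScript, and all names here are assumptions:

```typescript
// Cluster app: stands up the cluster and exports its kubeconfig so the
// core-resources and LOB stacks can consume it via StackReference.
import * as eks from "@pulumi/eks";

// Placeholder dev cluster; sizes are illustrative only.
const cluster = new eks.Cluster("dev-cluster", {
    desiredCapacity: 2,
    minSize: 1,
    maxSize: 3,
});

// Downstream stacks read this stack output by name.
export const kubeConfig = cluster.kubeconfig;
```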
b
do you do any `dependsOn` in your LOB apps?
a
like... a LOB app can depend on something in another stack?
Within an app, we use `dependsOn` to connect resources, yes. Not everything in an app would have a `dependsOn`.
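For readers following along, a minimal sketch of `dependsOn` between resources inside a single app, in TypeScript with placeholder names:

```typescript
import * as k8s from "@pulumi/kubernetes";

// Namespace that other resources in this app depend on.
const ns = new k8s.core.v1.Namespace("apps");

// Explicit ordering: the Deployment is only created after the Namespace.
// (Referencing ns.metadata.name already implies this; dependsOn makes it explicit.)
const app = new k8s.apps.v1.Deployment("lob-app", {
    metadata: { namespace: ns.metadata.name },
    spec: {
        selector: { matchLabels: { app: "lob-app" } },
        replicas: 1,
        template: {
            metadata: { labels: { app: "lob-app" } },
            spec: { containers: [{ name: "lob-app", image: "nginx:1.25" }] },
        },
    },
}, { dependsOn: [ns] });
```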
b
oh I see, so these are using stack references?
a
yes... we build the k8s cluster in an app and save the kubeConfig... then the core resources app gets a StackReference to the cluster app's kubeConfig and uses it to install the core resources... then the LOB Pulumi apps get the same StackReference and use the kubeConfig to install the LOB apps
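A rough sketch of that wiring, again assuming TypeScript and placeholder stack/output names:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Reference the cluster stack and pull its exported kubeconfig.
// The stack path and output name are assumptions based on the thread.
const clusterStack = new pulumi.StackReference("myorg/cluster/dev");
const kubeConfig = clusterStack.getOutput("kubeConfig");

// Explicit provider built from that kubeconfig; every core/LOB resource
// is created through it instead of the ambient kubectl context.
const provider = new k8s.Provider("cluster", { kubeconfig: kubeConfig });

// Example core resource installed through the provider (cert-manager via Helm).
const certManager = new k8s.helm.v3.Release("cert-manager", {
    chart: "cert-manager",
    namespace: "cert-manager",
    createNamespace: true,
    repositoryOpts: { repo: "https://charts.jetstack.io" },
    values: { installCRDs: true },
}, { provider });
```

If the cluster stack is destroyed first, this stack's state still lists the Release and anything else created through the provider, which is the manual cleanup being described above.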
b
ah, so you're talking about situations where you destroy the stacks in the wrong order?
a
yes
b
got it, if you file an issue for this we might be able to put the destroy behind an env var
a
We delete the parent without deleting the dependent children and then we have to manually go clean up the children
You mean an env var on the cluster app would cause it to destroy/clear the state of any stack that had declared, in some manner, a dependency on it?
b
the previous behaviour of the Kubernetes provider was to assume that if the API was gone, the services were gone. We reverted that behaviour because there are instances in which you get network timeouts and it could cause the state to be destroyed while the apps still exist. I think if you know the API has gone and want to clear your state, you should be able to do that - which I think would help you, right?
a
If an app took a dependency on a .kubeConfig and then detects that the .kubeConfig is gone or has changed (for the same StackReference), then when we do a `pulumi up` or `pulumi destroy` it simply clears out the state first and then proceeds; that's what I think I want.
Thinking about it more, the dependent app may have more than k8s resources in it, and completely clearing out its state may be dangerous
But I'd love the state for k8s resources to be cleared out if the kubeConfig was detected to have changed
😄