# kubernetes
b
do you do any `dependsOn` in your LOB apps?
a
like... a LOB app can depend on something in another stack?
Within an app, we use `dependsOn` to connect resources, yes. Not everything in an app would have a `dependsOn`.
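For context, a minimal sketch of what that looks like in TypeScript (the resource names and image are made up):

```typescript
import * as k8s from "@pulumi/kubernetes";

// Namespace the app is deployed into.
const ns = new k8s.core.v1.Namespace("lob-ns");

const labels = { app: "lob-app" };

// Referencing ns.metadata.name already creates an implicit dependency;
// dependsOn makes the ordering explicit for resources that don't consume
// any of the namespace's outputs.
const app = new k8s.apps.v1.Deployment("lob-app", {
    metadata: { namespace: ns.metadata.name },
    spec: {
        replicas: 1,
        selector: { matchLabels: labels },
        template: {
            metadata: { labels },
            spec: { containers: [{ name: "lob-app", image: "nginx:1.25" }] },
        },
    },
}, { dependsOn: [ns] });
```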
b
oh I see, so these are using stack references?
a
yes... we build the k8s cluster in an app and save the kubeConfig... then the core-resources app gets a StackReference to the cluster app's kubeConfig and uses it to install core resources... then the LOB Pulumi apps get the same StackReference and use the kubeConfig to install the LOB apps
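roughly this shape, as a TypeScript sketch (the `myOrg/cluster/prod` stack name and the `kubeConfig` output name are assumptions):

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// In the cluster app, the config is exported:
//   export const kubeConfig = cluster.kubeConfig;

// In a core-resources or LOB app, pull it back out of the cluster stack...
const clusterStack = new pulumi.StackReference("myOrg/cluster/prod");
const kubeConfig = clusterStack.getOutput("kubeConfig");

// ...and route every k8s resource through an explicit provider built from it.
const provider = new k8s.Provider("lob-k8s", { kubeconfig: kubeConfig });

const ns = new k8s.core.v1.Namespace("lob", {}, { provider });
```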
b
ah, so you're talking about situations where you destroy the stacks in the wrong order?
a
yes
b
got it, if you file an issue for this we might be able to put the destroy behind an env var
a
We delete the parent without deleting the dependent children and then we have to manually go clean up the children
You mean an env var on the cluster app would cause it to destroy/clear the state of any stack that had, in some manner, declared a dependency on it?
b
the previous behaviour of the Kubernetes provider was to assume that if the API was gone, the services were gone. We reverted that behaviour because there are instances in which you get network timeouts, and it could cause the state to be destroyed while the apps still exist. I think if you know the API has gone and want to clear your state, you should be able to do that - which I think would help you, right?
a
If an app took a dependency on a .kubeConfig and then detects that the .kubeConfig is gone or has changed (for the same StackReference), then when we do a `pulumi up` or `pulumi destroy` it simply clears out the state first and then proceeds - that's what I think I want.
Thinking about it more, the dependent app may have more than k8s resources in it, and completely cleaning out the state may be dangerous
But I'd love the state for k8s resources to be cleared out if the kubeConfig was detected to have changed
😄
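Until then, one workaround is to surgically drop the Kubernetes resources from the checkpoint. A rough sketch with the Automation API for Node - the stack name and workDir are hypothetical, and it assumes blowing away every `kubernetes:` resource in the state is what you want:

```typescript
import { LocalWorkspace } from "@pulumi/pulumi/automation";

async function clearK8sState(stackName: string, workDir: string): Promise<void> {
    const stack = await LocalWorkspace.selectStack({ stackName, workDir });

    // Same effect as `pulumi stack export`, editing the JSON by hand,
    // then `pulumi stack import`.
    const checkpoint = await stack.exportStack();
    const resources: any[] = checkpoint.deployment.resources ?? [];

    // Keep everything that isn't a Kubernetes resource. Note this leaves the
    // pulumi:providers:kubernetes resource and any stale dependency URNs in
    // place, so inspect the result before trusting it.
    checkpoint.deployment.resources = resources.filter(
        (r) => !(r.type as string).startsWith("kubernetes:"),
    );

    await stack.importStack(checkpoint);
}

// clearK8sState("prod", "/path/to/lob-app").catch(console.error);
```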