I’m coming back to some multi-stack management questions; this keeps biting me in the ass, so I think I’m doing something wrong:
Stack 1: deploys a k8s cluster (GKE/EKS/AKS) -> exports the kubeconfig
Stack 2: imports the kubeconfig from stack 1 -> makes a provider -> deploys stuff
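For concreteness, the wiring looks roughly like this (a heavily simplified sketch; the region, stack path, and kubeconfig templating are placeholders, not my exact code):

```typescript
// ----- Stack 1: cluster -----
import * as gcp from "@pulumi/gcp";
import * as pulumi from "@pulumi/pulumi";

const cluster = new gcp.container.Cluster("cluster", {
    location: "europe-west1", // placeholder region
    initialNodeCount: 3,      // the kind of field whose change replaced the cluster
});

// Placeholder for the usual templating of a full kubeconfig
// (clusters/users/contexts) from the cluster's outputs.
function buildKubeconfig(name: string, endpoint: string, caCert: string): string {
    return `# kubeconfig for ${name}: https://${endpoint} (CA: ${caCert})`;
}

export const kubeconfig = pulumi
    .all([cluster.name, cluster.endpoint, cluster.masterAuth.clusterCaCertificate])
    .apply(([name, endpoint, ca]) => buildKubeconfig(name, endpoint, ca));
```

```typescript
// ----- Stack 2: workloads -----
import * as k8s from "@pulumi/kubernetes";
import * as pulumi from "@pulumi/pulumi";

// Placeholder stack path ("org/project/stack").
const infra = new pulumi.StackReference("my-org/cluster-stack/prod");

// The provider is built from the kubeconfig output of stack 1.
const provider = new k8s.Provider("cluster-provider", {
    kubeconfig: infra.requireOutput("kubeconfig"),
});
```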
If I change anything in stack 1 that causes the cluster to be recreated (the exact case this time: I changed a GKE cluster's node count and hit yes too quickly because it was part of a large changeset), the cluster redeploys, the old one is deleted, and the kubeconfig is updated.
Any action in stack 2 then fails horribly, because even though the kubeconfig is replaced, all the deployed k8s resources still try to talk to the old cluster's IP address. It's as if they store that address in their state instead of using a refreshed value. Or is it the provider that holds that link, and would exporting the provider be better?
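To make the question concrete: every resource in stack 2 is bound to that single provider via resource options, roughly like this (the Deployment itself is just a placeholder workload):

```typescript
// On the next `pulumi up` after the cluster is replaced: does this resource
// re-resolve the provider's (now updated) kubeconfig, or does it act on the
// endpoint that was recorded in its state when it was created?
new k8s.apps.v1.Deployment("app", {
    spec: {
        selector: { matchLabels: { app: "app" } },
        template: {
            metadata: { labels: { app: "app" } },
            spec: { containers: [{ name: "app", image: "nginx" }] },
        },
    },
}, { provider });
```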
As the stacks are quite large on their own, manually fixing the state file is next to impossible to get right, even with scripts. At this point the only recovery I know of is to delete the stack completely and clean up the resources on the cloud provider myself.
Feels like a common thing to happen, so I assume I’m doing it wrong. Any pointers?