#general

proud-pizza-80589

09/02/2021, 6:42 AM
I’m coming back to some multi-stack management questions; it keeps biting me in the ass, so I think I’m doing something wrong.

Stack 1: deploys a k8s cluster (GKE/EKS/AKS) -> exports kubeconfig
Stack 2: imports the kubeconfig from stack 1 -> makes a provider -> deploys stuff

If I change anything in stack 1 that causes the cluster to be recreated (the exact case this time: I changed a GKE cluster’s number of nodes and pressed yes too quickly because it was part of a large changeset), the cluster redeploys, the old one is deleted, and the kubeconfig is updated. Any action in stack 2 then fails horribly, because even though the kubeconfig output is replaced, all the deployed k8s resources still try to talk to the old cluster’s IP address. It’s as if they store that in their state instead of using a refreshed value. Or is it the provider that holds that link, and would exporting the provider be better? As the stacks are quite large on their own, manually fixing the state file is next to impossible, even with scripts. At this point the only way I know to recover is to delete the stack completely and clean up on the cloud provider myself. This feels like a common thing to happen, so I assume I’m doing it wrong. Any pointers?
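For context, a minimal sketch of the two-stack pattern being described (stack and output names here are illustrative, not from the thread):

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Stack 2: read the kubeconfig exported by the cluster stack.
// "my-org/cluster/dev" is a hypothetical stack name.
const clusterStack = new pulumi.StackReference("my-org/cluster/dev");
const kubeconfig = clusterStack.getOutput("kubeconfig");

// Build an explicit Kubernetes provider from that kubeconfig...
const provider = new k8s.Provider("k8s", { kubeconfig });

// ...and deploy workloads through it. Resources created this way
// record the provider's connection details in the stack's state,
// which is why they can keep pointing at the old cluster after a replace.
const ns = new k8s.core.v1.Namespace("apps", {}, { provider });
```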

bored-table-20691

09/02/2021, 6:44 AM
I’ve run into this as well (and it makes me wary of changing anything that would cause the EKS cluster to be re-created).
I think part of the challenge is that if the Kubernetes link changes (e.g. the kubeconfig changes), the state thinks it needs to delete resources from the old cluster, which obviously isn’t available anymore, so that fails.
The same thing happens in similar setups (e.g. you create an RDS instance in one project/stack and databases in that RDS instance in another - if you recreate the RDS instance, you’re somewhat hosed when you try to update the dependent stack).

proud-pizza-80589

09/02/2021, 6:50 AM
I could “protect” the cluster once it is final; that would prevent anything bad from happening to it, but at this point tuning it is still part of the job to be done.
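For reference, protecting the cluster is a one-line resource option (a sketch, assuming GKE; the cluster arguments are illustrative):

```typescript
import * as gcp from "@pulumi/gcp";

// `protect: true` makes `pulumi up` / `pulumi destroy` refuse to
// delete (or replace-delete) this resource until the flag is removed.
const cluster = new gcp.container.Cluster("main", {
    initialNodeCount: 3,
    location: "europe-west1",
}, { protect: true });
```

To later allow deletion, you remove the option (or use `pulumi state unprotect` on the resource's URN).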

bored-table-20691

09/02/2021, 7:00 AM
Yes. In a way I’d want to be able to say something like “for this provider, if the provider definition changed, just forget everything from the previous version and recreate it all”. Currently the only way to do that is basically to destroy the entire stack.

proud-pizza-80589

09/02/2021, 7:22 AM
Yeah, that is a bit hard since I have several clusters on different providers in one stack; I’m probably going to need a single stack per cluster.

salmon-guitar-84472

09/02/2021, 9:02 AM
Do you reload your Kubernetes resource provider with an import from EKS and GenerateKubeconfig from the cluster stack to the kubernetes stack? I have also found that EnableDryRun = true is quite helpful, as well as running a pulumi refresh to rebuild what’s actually in the cluster before kicking off on a new cluster.

proud-pizza-80589

09/02/2021, 11:17 AM
That would not work: to get the refresh info it needs to connect to the cluster, but it uses the old kubeconfig.

bored-oyster-3147

09/02/2021, 11:36 AM
Forgive me if this makes no sense - I use ECS, so I’m not 100% familiar with k8s terms/config. Would an auto scaling group help? I think for k8s it’s called a “cluster autoscaler” - would that provide an additional layer of abstraction over your cluster, potentially handling the pass-through of new config when you need to make scaling changes on the ASG?
But yeah, I would definitely recommend marking the problem clusters as protected in the meantime, if only so you can coordinate changes/downtime and are no longer surprised.
I’m almost more surprised that the cloud provider even lets you delete the cluster while you still have nodes referencing that config. I must be misunderstanding something about k8s there.

proud-pizza-80589

09/02/2021, 1:13 PM
It’s a bit different - I’m talking about Pulumi stacks. While you can get information (e.g. a kubeconfig) from one stack and use it to deploy something on that cluster, the consuming stack is oblivious to anything that happens in the base stack. So if the kubeconfig changes, the stack using it doesn’t realise this and tries to connect with the old kubeconfig.
And if you change something that replaces the entire cluster, then the old cluster’s kubeconfig (obviously) no longer works.
A solution would be:
• the importing stack checks whether the output it used has changed. It could then just use the new value, and combined with --refresh it would notice that the pods are gone and redeploy them;
• or alternatively, allow for destroy --do-your-best-and-ignore-leftovers, which you could combine with --delete-failures-from-the-state.
Option 1 is very k8s-specific, since for other infra this is not necessarily the way to go.
Option 2 could leave leftovers, and in general most tools prefer to be very strict about making you clean up yourself.

bored-oyster-3147

09/02/2021, 2:05 PM
Do you need to output the kubeconfig itself? Could you instead output some identifier for the cluster and have stack 2 fetch the kubeconfig on execution?
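A sketch of that idea for GKE (stack/output names are illustrative, and the exec-auth setup is an assumption - adjust to your environment): stack 1 exports only the cluster name and location, and stack 2 looks the cluster up at execution time and assembles a kubeconfig from the live endpoint, so it always targets whatever cluster currently holds that name:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";
import * as k8s from "@pulumi/kubernetes";

// Hypothetical stack and output names.
const clusterStack = new pulumi.StackReference("my-org/cluster/dev");
const name = clusterStack.getOutput("clusterName");
const location = clusterStack.getOutput("clusterLocation");

// Look the cluster up fresh on every run instead of trusting a stored kubeconfig.
const cluster = gcp.container.getClusterOutput({ name, location });

// Assemble a kubeconfig from the live endpoint and CA certificate.
// Auth here assumes the gke-gcloud-auth-plugin is installed.
const kubeconfig = pulumi
    .all([cluster.name, cluster.endpoint, cluster.masterAuths])
    .apply(([n, endpoint, auths]) => JSON.stringify({
        apiVersion: "v1",
        kind: "Config",
        clusters: [{ name: n, cluster: {
            server: `https://${endpoint}`,
            "certificate-authority-data": auths[0].clusterCaCertificate,
        }}],
        contexts: [{ name: n, context: { cluster: n, user: n } }],
        "current-context": n,
        users: [{ name: n, user: { exec: {
            apiVersion: "client.authentication.k8s.io/v1beta1",
            command: "gke-gcloud-auth-plugin",
        }}}],
    }));

const provider = new k8s.Provider("k8s", { kubeconfig });
```

Note this only changes where the kubeconfig comes from; resources already recorded in stack 2’s state against the old cluster would still need a `pulumi refresh` (or state surgery) after a replace.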