# kubernetes
is there any support in pulumi EKS (crosswalk or standard) for upgrading clusters (https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html)? 1.15 support was just launched, and upgrading the `pulumi.eks.Cluster` version wants me to tear down everything, which is really not ideal.
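for reference, a minimal sketch of the shape of my program (names and sizes here are placeholders, not my real config):
```typescript
import * as eks from "@pulumi/eks";

// Placeholder cluster: bumping `version` is the only change on update.
const cluster = new eks.Cluster("my-cluster", {
    version: "1.15", // was "1.14"; this bump is what triggers the scary preview
    desiredCapacity: 2,
    minSize: 1,
    maxSize: 3,
});

export const kubeconfig = cluster.kubeconfig;
```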
this may just be the provider being updated and confusing things?
yep, upgrade worked correctly! nm.
What output did you see and what did you expect to see?
> yep, upgrade worked correctly! nm.
Ah, great. So the update worked as planned?
the issue is that the cluster thinks the provider is going to change, and claims that everything will be replaced (node pools, literally all kubernetes objects that use that provider)
but after it realizes the provider text hasn't changed, it ends up only updating the actual cluster object (and version).
it should probably also update the default node pool, because now i'm at a loss as to how to update that, but it does basically work as intended.
Got it, thanks for the details. IIUC, the pulumi tool could be conservatively showing changes in the preview, but since the kubeconfig does not change, the rest won't actually change in practice either. Seems similar to this comment: https://github.com/pulumi/pulumi-eks/issues/338#issuecomment-589998046 Does that ring true for you?
> it should probably also update the default node pool, because now i'm at a loss as to how to update that, but it does basically work as intended.
The default node group is not part of the control plane. It is a self-managed node group that pulumi creates on your behalf [1]. It is recommended to manage node groups separately from the control plane, that is, skipping the default node group and either creating the node groups directly yourself [2] or having EKS manage them for you [3]. Note there are some trade-offs [4], but if you manage them yourself, [5] exists to help with migrations, though it applies to managed node groups too (aws just handles the nodes, not your workloads).
1 - https://github.com/pulumi/pulumi-eks/blob/master/nodejs/eks/cluster.ts#L1069-L1077
2 - https://github.com/pulumi/pulumi-eks/tree/master/nodejs/eks/examples/nodegroup
3 - https://github.com/pulumi/pulumi-eks/tree/master/nodejs/eks/examples/managed-nodegroups
4 - https://eksctl.io/usage/eks-managed-nodegroups/#feature-parity-with-unmanaged-nodegroups
5 - https://www.pulumi.com/docs/tutorials/kubernetes/eks-migrate-nodegroups/
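For reference, a rough sketch of the self-managed route (resource names, instance type, and sizes here are illustrative, not prescriptive):
```typescript
import * as aws from "@pulumi/aws";
import * as eks from "@pulumi/eks";

// IAM role the worker nodes will assume.
const nodeRole = new aws.iam.Role("node-role", {
    assumeRolePolicy: aws.iam.assumeRolePolicyForPrincipal({
        Service: "ec2.amazonaws.com",
    }),
});

// Standard managed policies that EKS worker nodes need.
[
    "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy",
    "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy",
    "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
].forEach((policyArn, i) => new aws.iam.RolePolicyAttachment(`node-policy-${i}`, {
    role: nodeRole,
    policyArn,
}));

const instanceProfile = new aws.iam.InstanceProfile("node-profile", { role: nodeRole });

// Skip the default node group so the control plane and the workers are
// managed (and upgraded) independently.
const cluster = new eks.Cluster("cluster", {
    skipDefaultNodeGroup: true,
    instanceRoles: [nodeRole],
});

// A self-managed node group attached to the cluster.
cluster.createNodeGroup("ng", {
    instanceType: "t2.medium",
    desiredCapacity: 2,
    minSize: 1,
    maxSize: 3,
    instanceProfile: instanceProfile,
});
```
The managed route [3] looks much the same, with eks.ManagedNodeGroup in place of createNodeGroup.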
the comment rings true, absolutely. i've seen pulumi replace things unexpectedly in the k8s space (mostly Deployments), so i was especially worried.
as far as node groups go, i understand that managing our own node groups is the best long-term strategy - but it's very weird that i now have an EKS cluster provisioned by pulumi whose settings will not match the settings of a cluster spun up from scratch. this isn't how a declarative interface should work ...
(thanks for the links, though, those will be very helpful for me to solve this issue!)
> weird that i now have an EKS cluster provisioned by pulumi whose settings will not match the settings of a cluster spun up from scratch.
Could you please elaborate on where the mismatch is occurring?
the default node pool i'm running is still on kubernetes version 1.14, but if i spin up a new stack, it will use the declared version (1.15).
updating the kubernetes version didn't seem to trickle down to the LaunchConfiguration or whatever.
That seems like odd behavior, as we do trickle down the version for the node AMI used [1]. Do you mind opening an issue with a simple repro so we can investigate further?
1 - https://github.com/pulumi/pulumi-eks/blob/master/nodejs/eks/nodegroup.ts#L388
indeed, it struck me as pretty odd.
i'll try to set up a repro over the next couple of days. 🙂
🎉 1
Thank you!
i think it's literally:
- set up a cluster with 1.14, with some nodes (say, 2)
- update to 1.15
... but i'll try to formalize 🙂
Tried this with a cluster on 1.14 and a self-managed node group attached to it. The control plane updated fine to 1.15, and the node group remains on 1.14, given that they're separate in this case. I'll retry with the default node group instead of a self-managed one. To ask a clarifying question: were you expecting the default node group version to match the control plane version on an update, and in your run you did not see this?
correct, and this is because i didn't specify a node group version, and expected it to inherit from the control plane like it does on creation.
👍🏻 1
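for anyone finding this later: the workaround i'm assuming will hold, reusing the `cluster` and `instanceProfile` from the sketch upthread, is to pass the node group `version` explicitly rather than relying on create-time inheritance:
```typescript
// Assumes the `cluster` (with skipDefaultNodeGroup) and `instanceProfile`
// from the earlier sketch. `version` selects the Kubernetes version (and
// hence the AMI) for the workers; set it explicitly to track the control
// plane across upgrades.
cluster.createNodeGroup("ng", {
    version: "1.15",          // keep in lockstep with the control plane
    instanceType: "t2.medium",
    desiredCapacity: 2,
    minSize: 1,
    maxSize: 3,
    instanceProfile: instanceProfile,
});
```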