# kubernetes
helpful-morning-53046:
Hello all. I wonder if anyone can give some advice on the following. I am using the official Pulumi Amazon EKS package to create a K8s cluster. I used the `Cluster.nodeGroupOptions` input to set up the worker nodes, but now need to upgrade the cluster version. In the past I would have done this by setting up a new node group in parallel with the old one and then migrating the workloads over. It seems I have shot myself in the foot by using `nodeGroupOptions`, as I get the error

> Setting nodeGroupOptions, and any set of singular node group option(s) on the cluster, is mutually exclusive. Choose a single approach.

when trying to spin up a `NodeGroup` (or `ManagedNodeGroup`) alongside the existing one. Does anyone have advice on how to perform a zero-downtime worker node upgrade, i.e. without having to tear them all down?
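For context, my setup looks roughly like this (a sketch; the cluster name, instance type, and sizes are placeholders, not my real values):

```typescript
import * as eks from "@pulumi/eks";

// Existing setup: worker nodes configured via nodeGroupOptions.
const cluster = new eks.Cluster("my-cluster", {
    nodeGroupOptions: {
        instanceType: "t3.medium",
        desiredCapacity: 3,
        minSize: 1,
        maxSize: 5,
    },
});

// Declaring any eks.NodeGroup or eks.ManagedNodeGroup against this
// cluster now fails with the "mutually exclusive" error quoted above.
```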
billowy-army-68599:
@helpful-morning-53046 can you share the code you're attempting? you can definitely have multiple node groups
helpful-morning-53046:
Hi @billowy-army-68599, thanks for getting back to me. In this case I didn't declare the node groups manually; instead I used the `eks.Cluster` input property `nodeGroupOptions` to create them. When attempting to declare another `eks.NodeGroup` manually, I receive the error message mentioned above. I am interpreting this to mean that `nodeGroupOptions` is a convenient way to create a single node group, but whenever you need a more complex configuration you should use `eks.NodeGroup` or `eks.ManagedNodeGroup`.
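So the layout I think I need is something like the following (a sketch; the IAM role/profile, instance types, and sizes are placeholders, not my real config):

```typescript
import * as aws from "@pulumi/aws";
import * as eks from "@pulumi/eks";

// Hypothetical IAM role/profile for the workers
// (managed policy attachments omitted for brevity).
const nodeRole = new aws.iam.Role("node-role", {
    assumeRolePolicy: aws.iam.assumeRolePolicyForPrincipal({
        Service: "ec2.amazonaws.com",
    }),
});
const nodeProfile = new aws.iam.InstanceProfile("node-profile", {
    role: nodeRole.name,
});

// skipDefaultNodeGroup turns off the nodeGroupOptions-style default
// group, so every node group is declared explicitly.
const cluster = new eks.Cluster("my-cluster", {
    skipDefaultNodeGroup: true,
    instanceRoles: [nodeRole],
});

// Old and new groups running side by side during the upgrade.
const oldNodes = new eks.NodeGroup("old-nodes", {
    cluster: cluster,
    instanceType: "t3.medium",
    desiredCapacity: 3,
    minSize: 1,
    maxSize: 5,
    instanceProfile: nodeProfile,
});
const newNodes = new eks.NodeGroup("new-nodes", {
    cluster: cluster,
    instanceType: "t3.medium",
    desiredCapacity: 3,
    minSize: 1,
    maxSize: 5,
    instanceProfile: nodeProfile,
});
```

Once `new-nodes` is healthy, the plan would be to cordon and drain the old nodes with kubectl and then scale `old-nodes` down.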
My problem now is that I have live infrastructure set up using `nodeGroupOptions` but need to migrate, with zero downtime, over to the more flexible manual declarations. This was proving difficult due to the two approaches being incompatible with each other. I had a realisation last night that I could just remove the node groups from the Pulumi state, keeping them running while allowing Pulumi to create new node groups alongside them.
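In other words, something like this with the Pulumi CLI (the URN below is a placeholder for whatever `pulumi stack --show-urns` reports for the old node group's resources):

```
# List the stack's resource URNs to find the old node group's resources.
pulumi stack --show-urns

# Remove them from state only; the underlying EC2 instances keep running.
# (Child resources may need deleting first, leaf to parent.)
pulumi state delete '<urn-of-old-node-group-resource>'
```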