modern-zebra-45309 (08/06/2024, 5:31 PM)
... `pulumi up`. If you just want to change something about the nodepool (e.g., the machine type), you simply change that particular argument, and Pulumi will know whether the change requires recreating the nodepool (in which case it will, by default, make the new one, then delete the old one) or can be done by changing the existing nodepool in place.
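
For illustration, a minimal Pulumi YAML sketch of such a nodepool. The `exoscale:SksNodepool` type token and the property names (`instanceType`, `size`) are assumptions based on the Terraform-derived Exoscale provider, so check the provider docs:

resources:
  my-nodepool:
    type: exoscale:SksNodepool        # type token assumed
    properties:
      zone: ch-gva-2                  # example zone
      clusterId: ${my-cluster.id}
      name: my-nodepool
      instanceType: standard.medium   # editing this is a plain diff; Pulumi decides update vs. replace
      size: 3

On the next `pulumi up`, the preview shows whether such a change is an in-place update or a replacement.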

modern-zebra-45309 (08/06/2024, 5:48 PM)
... the `clusterId` and the zone.
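
Those two values typically come from the cluster resource itself; a hedged sketch, again assuming the Exoscale provider's resource names:

resources:
  my-cluster:
    type: exoscale:SksCluster         # type token assumed
    properties:
      zone: ch-gva-2
      name: my-cluster
  my-nodepool:
    type: exoscale:SksNodepool
    properties:
      clusterId: ${my-cluster.id}     # reference the cluster's outputs
      zone: ${my-cluster.zone}
      name: my-nodepool
      instanceType: standard.medium
      size: 3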

modern-zebra-45309 (08/07/2024, 9:03 AM)
> For instance, once scaled up, I can have 3 nodes with version 1.29.7 and 3 nodes with version 1.30.3, all in the same nodepool. Then I drain the 1.29.7 nodes and delete them.
It doesn't look like the SksNodepool resource exposes the Kubernetes version. How do you control it right now? Is it just using the latest version when creating the nodepool?
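
In the Terraform-derived Exoscale provider, the Kubernetes version appears to live on the cluster rather than on the nodepool; if that holds for the Pulumi provider too, pinning it would look roughly like this (the `version` property is an assumption):

resources:
  my-cluster:
    type: exoscale:SksCluster
    properties:
      zone: ch-gva-2
      name: my-cluster
      version: 1.30.3                 # assumed property; omit to take the latest available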

modern-zebra-45309 (08/07/2024, 9:06 AM)
... `name=my-nodepool-${cluster.version}` or something like this, and replace when the name changes.

modern-zebra-45309 (08/07/2024, 9:10 AM)
resources:
  my-resource:
    type: does:not/exist
    properties:
      name: this-is-the-name-with-${cluster.version}
    options:
      replaceOnChanges:
        - name
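
Adapted to the nodepool in question, that placeholder might look like this (type token and properties assumed, as above):

resources:
  my-nodepool:
    type: exoscale:SksNodepool
    properties:
      clusterId: ${my-cluster.id}
      zone: ${my-cluster.zone}
      name: my-nodepool-${my-cluster.version}   # a new cluster version changes the name...
      instanceType: standard.medium
      size: 3
    options:
      replaceOnChanges:
        - name                                  # ...and a changed name forces a replacement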

modern-zebra-45309 (08/07/2024, 10:07 AM)
Unless you set `deleteBeforeReplace` to `True`, the new nodepool will be created first and then the old nodepool will be deleted. This will allow your workloads to shift; it's a very common pattern with Kubernetes clusters. You can trust Kubernetes to handle this re-scheduling for you.
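
The ordering is controlled by the `deleteBeforeReplace` resource option, which is off by default, so replacements are create-first:

  my-nodepool:
    # ...same properties as above...
    options:
      replaceOnChanges:
        - name
      deleteBeforeReplace: true   # opt in to delete-first; leave unset for create-first, then delete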

modern-zebra-45309 (08/07/2024, 10:09 AM)
... `get` functions for resources), but the control over replacements and deletions is exactly the same.

rapid-belgium-27383 (08/07/2024, 12:03 PM)
resources:
  my-resource:
    type: does:not/exist
    properties:
      name: this-is-the-name-with-${cluster.version}
    options:
      replaceOnChanges:
        - name
A new nodepool is created and the previous one is deleted. The workload is also migrated to the new nodepool, since it's Kubernetes' job to do that part.
But (sorry, there is a "but" 🙂), the old nodepool is deleted just after the new nodepool is created, and Kubernetes takes a few tens of seconds before deciding to move the workloads. So right after the old nodepool is deleted, the workload is not running anymore (interruption of service); we have to wait for Kubernetes to detect that the nodes are gone and then reschedule the workloads on the new nodes.
The proper way to do this would be to have both nodepools in parallel, then drain the old nodes, make sure everything is correctly rescheduled, and only then delete the old nodepool.
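
One way to sketch that parallel-nodepool rotation in Pulumi YAML (resource names and properties assumed as in the earlier examples; the drain itself happens outside Pulumi, between two `pulumi up` runs):

resources:
  nodepool-old:                       # keep until its nodes are drained
    type: exoscale:SksNodepool
    properties:
      clusterId: ${my-cluster.id}
      zone: ${my-cluster.zone}
      name: my-nodepool-1-29-7
      instanceType: standard.medium
      size: 3
  nodepool-new:
    type: exoscale:SksNodepool
    properties:
      clusterId: ${my-cluster.id}
      zone: ${my-cluster.zone}
      name: my-nodepool-1-30-3
      instanceType: standard.medium
      size: 3
# 1. pulumi up            -> both nodepools exist side by side
# 2. kubectl cordon/drain -> move workloads off the old nodes and verify rescheduling
# 3. remove nodepool-old from the program, pulumi up again -> old pool deleted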

modern-zebra-45309 (08/07/2024, 5:58 PM)
> Do you think each provider maintains its Pulumi library?
I don't think so. A "provider" is a component in Pulumi (and Terraform) that handles the communication with the platform. So there's an AWS provider (several, actually), a Kubernetes provider...