# google-cloud
f
Hi there, I don’t know if that’s a “feature”, but changing the labels of a
NodePool
makes Pulumi delete and recreate the pool… which is obviously not what I want in production…
Previewing update (test):
     Type                          Name                          Plan        Info
     pulumi:pulumi:Stack           test
     └─ gcp-clusters:Cluster       cluster                        
 +-     └─ gcp:container:NodePool  cluster-pool-main  replace     [diff: ~nodeConfig]
Did I miss something? (I literally only added a label. Just to make sure there are no side effects: if I remove this addition from the code, Pulumi doesn’t try to update anything anymore.)
e
Have you tried pulumi.IgnoreChanges?
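For reference, a minimal sketch of that option in Pulumi's Python SDK — the cluster name, machine type, and label values below are placeholders, not from the original stack:

```python
import pulumi
import pulumi_gcp as gcp

# Hypothetical node pool; resource names and values are illustrative only.
pool = gcp.container.NodePool(
    "cluster-pool-main",
    cluster="my-cluster",  # assumption: an existing GKE cluster name
    node_count=3,
    node_config=gcp.container.NodePoolNodeConfigArgs(
        machine_type="e2-standard-4",
        labels={"team": "platform"},
    ),
    # ignore_changes tells Pulumi not to act on diffs for these paths,
    # so editing the labels no longer triggers a replace of the pool.
    opts=pulumi.ResourceOptions(ignore_changes=["nodeConfig.labels"]),
)
```

Note the trade-off: Pulumi will then silently ignore *any* change to those labels, so the code and the real pool can drift apart.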
f
don’t know this one
but it’s kind of weird to have to explicitly tell Pulumi that, no, a label is not a valid reason to completely destroy a node pool 😓
g
It's not something in Pulumi: you can't change the labels of a node pool. That's the GKE API. Since you changed an immutable field, the only way for Pulumi to apply it is to recreate the pool.
f
@green-school-95910 Ok, I didn’t know that, thanks for pointing that out. So it’s a “feature” of GKE. Seems impractical when you want to change or add a label on a production platform… you can’t be expected to take everything down for that! (I’m clearly missing the point here.)
g
Well, even if GKE allowed that, it would have to recreate the pool anyway; it would just happen under the hood. The node labels are set as command-line parameters when the nodes are initialized, so you need to restart a node to change them.

Kubernetes does allow adding labels to a node dynamically through its API, but those labels apply only to that one specific node, not to the whole pool, so using them would break the autoscaler. Say GKE added the label dynamically: you add label A, and then there's a pod whose node selector rules out nodes with label A. The autoscaler doesn't know that label A will end up on all new nodes of that pool, since it was added dynamically and isn't in the template definition. It adds a node to the pool, the GKE controller applies label A to it, and the pod is still unschedulable even though the autoscaler added a node for it. They would have to change a whole lot of code to make this possible, and there are probably other problems I'm not seeing.
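That scale-up mismatch can be sketched as a toy model — the label names and the `schedulable` helper below are made up for illustration; the real autoscaler simulates candidate nodes from the pool's node template:

```python
# Toy model of the scenario above: the autoscaler judges scale-up by
# simulating a new node from the pool *template*, which does not contain
# labels that were added dynamically via the Kubernetes API.

def schedulable(node_labels: dict) -> bool:
    """Hypothetical pod whose node selector rules out nodes carrying label A."""
    return node_labels.get("label-a") is None

template_labels = {"pool": "main"}  # pool template: no label A here

# Autoscaler's view: a fresh template node would fit the pod -> scale up.
assert schedulable(template_labels)

# Reality: the GKE controller re-applies the dynamic label to the new node.
real_node_labels = {**template_labels, "label-a": "true"}

# The pod is still unschedulable, even though a node was added for it.
assert not schedulable(real_node_labels)
```

The gap between the template's view and the real node's labels is exactly why pool-wide labels live in the (immutable) node template rather than being patched on at runtime.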
f
ok, thanks for taking the time to explain all of that @green-school-95910. That clearly shows I need a better understanding of the implications labels have at both the node and pod level, and why changing labels shouldn't be taken lightly once in production.