# general
t
Hey, anybody else suddenly unable to create/update k8s deployments? Just updated Pulumi and trying to figure out if it's something with GKE or the latest Pulumi?
b
What symptoms are you finding @thousands-london-78260?
Did you update any versions of the CLI?
t
Ok, no, it's definitely something with Google. They've recently updated GKE, and I think there may be problems with creating nodePools now, because trying to spin up a cluster we're getting a 400:
googleapi: Error 400: Can't parse NodePool version ""., badRequest
where this is the function used to generate the nodePool config
function standardPool(machineType: string, initialNodeCount = 0) {
  return {
    upgradeSettings: {
      maxSurge: 2,
      maxUnavailable: 0,
    },
    name: machineType,
    initialNodeCount,
    autoscaling: {
      minNodeCount: 0,
      maxNodeCount: 10,
    },
    nodeConfig: {
      machineType,
      diskSizeGb: 30,
      preemptible: false,
      shieldedInstanceConfig: {
        enableIntegrityMonitoring: true,
        enableSecureBoot: true,
      },
      metadata: {
        "disable-legacy-endpoints": "true",
      },
      oauthScopes: [
        "https://www.googleapis.com/auth/devstorage.read_only",
        "https://www.googleapis.com/auth/logging.write",
        "https://www.googleapis.com/auth/monitoring",
        "https://www.googleapis.com/auth/servicecontrol",
        "https://www.googleapis.com/auth/service.management.readonly",
        "https://www.googleapis.com/auth/trace.append",
      ],
      // TODO: more secure config
      //   serviceAccount: serviceAccount.accountId,
      //   oauthScopes: ["https://www.googleapis.com/auth/cloud-platform"]
    },
    version: "",
    management: {
      autoRepair: true,
      autoUpgrade: true,
    },
  } as gcp.types.input.container.ClusterNodePool;
}
The same exact code used to work when I created a cluster a few weeks ago.
Ok, it seems there's now a big difference between a cluster's inline node_pool and a standalone node_pool.
b
Ok so something in actual GCP or with the Pulumi provider?
t
I think what has changed is that Terraform now has container.NodePool, but I was still creating cluster.nodePools inline (no longer recommended). The solution seems to be to create the node_pools AFTER creating the cluster; when creating node_pools during cluster creation, you now need to specify a version, I think.
Tip: you can use the removeDefaultNodePool: true option so that only your custom node pools exist after creation (which was the reason I created them inline with the cluster before).
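For reference, a minimal sketch of that approach with the Pulumi GCP provider: create the cluster with removeDefaultNodePool: true, then attach a standalone gcp.container.NodePool afterwards so no version string has to be passed at cluster-creation time. Resource names, the location, and the machine type here are placeholders, not from the thread.

```typescript
import * as gcp from "@pulumi/gcp";

// Create the cluster with a throwaway default pool that gets removed,
// so only the standalone pools below remain.
const cluster = new gcp.container.Cluster("my-cluster", {
    location: "europe-west1-b",       // placeholder location
    removeDefaultNodePool: true,      // drop the default pool once the cluster is up
    initialNodeCount: 1,              // still required at creation time
});

// Standalone node pool, created AFTER the cluster. No explicit `version`
// is set, so it inherits the cluster's node version.
const standardPool = new gcp.container.NodePool("standard-pool", {
    cluster: cluster.name,
    location: cluster.location,
    initialNodeCount: 0,
    autoscaling: { minNodeCount: 0, maxNodeCount: 10 },
    management: { autoRepair: true, autoUpgrade: true },
    upgradeSettings: { maxSurge: 2, maxUnavailable: 0 },
    nodeConfig: {
        machineType: "n1-standard-1", // placeholder machine type
        diskSizeGb: 30,
        preemptible: false,
        shieldedInstanceConfig: {
            enableIntegrityMonitoring: true,
            enableSecureBoot: true,
        },
        metadata: { "disable-legacy-endpoints": "true" },
        oauthScopes: ["https://www.googleapis.com/auth/cloud-platform"],
    },
});
```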
I think the problem isn't so much with Pulumi as with external vendors (Google in this case) changing how things work, Terraform adopting those changes, and the changes finally trickling down into Pulumi. It makes it look like your code suddenly stopped working, but the cause is shielded behind several layers of CHANGELOGs. I won't be able to see in the Pulumi docs that "this is now how you create a cluster"; I have to follow the errors all the way back up to the vendor.