# general
b
@many-psychiatrist-74327 is this for the default node pool? if so, that's expected behaviour. can you share your code?
m
sure, give me a sec
this is the cluster definition:
```typescript
const cluster = new gcp.container.Cluster(
    name,
    {
      name: name,
      location: config.require("region"),
      network: vpc.name,
      subnetwork: subnetwork.name,

      privateClusterConfig: {
        // Make cluster nodes private.
        enablePrivateNodes: true,
        // Allow external access to k8s API (e.g. for us to use kubectl).
        enablePrivateEndpoint: false,
        masterIpv4CidrBlock: "172.10.0.0/28",
      },
      networkingMode: "VPC_NATIVE", // Required for private clusters.
      ipAllocationPolicy: {
        // These values were chosen by keeping the "172." prefix and choosing
        // low values for the second octet, to minimize probability of collisions
        // with GCP default blocks (which tend to have a high second octet).
        // The block sizes are the same as GCP uses by default.
        servicesIpv4CidrBlock: "172.21.0.0/20",
        clusterIpv4CidrBlock: "172.22.0.0/16",
      },

      nodePools: [
        {
          name: "pool-1",
          initialNodeCount: 2,
          autoscaling: {
            minNodeCount: 1,
            maxNodeCount: 10,
          },
          nodeConfig: {
            serviceAccount: serviceAccount.email,
            oauthScopes: ["https://www.googleapis.com/auth/cloud-platform"],
            machineType: "e2-medium",
            imageType: "UBUNTU",
          },
        },
      ],

      releaseChannel: {
        channel: "REGULAR",
      },
      minMasterVersion: "1.21.5-gke.1302",
      nodeVersion: "1.21.5-gke.1302",
      resourceUsageExportConfig: {
        bigqueryDestination: {
          datasetId: meteringDatasetId,
        },
      },
      notificationConfig: {
        pubsub: {
          enabled: true,
          topic: maintenanceTopic.id,
        },
      },

      enableKubernetesAlpha: false,
      enableL4IlbSubsetting: false,
      enableLegacyAbac: false,
      enableTpu: false,
    },
    { provider: gcpProvider, dependsOn: [nat], protect: true }
  );
```
so, if I try to change `pool-1`’s `machineType`, or if I add another pool (`pool-2`), then pulumi tries to recreate the entire cluster. I really don’t want that to happen
b
that's a limitation on the Google side: modifying the default node pool will replace the cluster. You need to use this resource: https://www.pulumi.com/registry/packages/gcp/api-docs/container/nodepool/
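A minimal sketch of what that looks like, reusing `cluster`, `config`, `serviceAccount`, and `gcpProvider` from the snippet above and the same settings as the inline pool (not a drop-in, but the shape of it). Once the pool is its own resource, changing its `machineType` or adding a second `NodePool` replaces or creates only the pool, not the cluster:

```typescript
import * as gcp from "@pulumi/gcp";

// Standalone node pool attached to the existing cluster. Changing
// machineType (or adding another NodePool) no longer touches the cluster.
const pool1 = new gcp.container.NodePool(
  "pool-1",
  {
    cluster: cluster.name,
    location: config.require("region"),
    initialNodeCount: 2,
    autoscaling: {
      minNodeCount: 1,
      maxNodeCount: 10,
    },
    nodeConfig: {
      serviceAccount: serviceAccount.email,
      oauthScopes: ["https://www.googleapis.com/auth/cloud-platform"],
      machineType: "e2-medium", // replacing this now recreates just the pool
      imageType: "UBUNTU",
    },
  },
  { provider: gcpProvider }
);
```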
m
Basically I want pulumi to treat the change as an `update` instead of a `recreate`
b
it's generally recommended to make the default node pool very small, and use explicit node pools
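A sketch of the cluster side of that advice, assuming the same definition as above with the inline `nodePools` block dropped; `removeDefaultNodePool` is the provider arg for going one step further and deleting the default pool entirely:

```typescript
const cluster = new gcp.container.Cluster(
  name,
  {
    // ...network, CIDR, release-channel, etc. settings as above, but with
    // the inline nodePools block removed...

    // Keep the default pool tiny (one node)...
    initialNodeCount: 1,
    // ...and delete it once the cluster is up, so every real pool is a
    // standalone gcp.container.NodePool resource.
    removeDefaultNodePool: true,
  },
  { provider: gcpProvider, dependsOn: [nat], protect: true }
);
```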
m
hmmmm ok
ok i’ll check that out, thank you! 🙏