# kubernetes
k
My Pulumi program creates a new GKE cluster on every run. Anyone seen this behavior? I saw someone mention that adding networking config was a workaround for them, but the params they mentioned don't seem to work in the current version of the libraries.
h
Does it tear down the old cluster too? As in, is it replacing or just creating the new cluster?
k
I believe it says "recreate" when it's doing it, and it just creates a new cluster. The old clusters I have to go back and delete manually. When I do a pulumi destroy, it tells me that it's set to not delete clusters, but I'm not sure if that is related to the actual issue…
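If that destroy message is the provider's deletion protection (recent pulumi-gcp versions expose a DeletionProtection field on container.ClusterArgs that blocks deletes by default), it can be disabled explicitly. A minimal sketch, assuming that's what the message refers to:
```go
cluster, err := container.NewCluster(ctx, projectName, &container.ClusterArgs{
	// Assumption: deletion protection is what blocks `pulumi destroy`.
	// Disabling it lets the cluster be removed along with the stack.
	DeletionProtection: pulumi.Bool(false),
	// ... rest of the cluster config unchanged ...
})
```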
I’ve added network and subnetwork to the container.NewCluster call and am trying it again after removing all of the clusters.
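For context, the network and subnetwork showing up in the preview below were created along these lines (a sketch, not the exact code; the region and CIDR range are assumptions, and the compute package comes from the pulumi-gcp Go SDK):
```go
// Custom-mode VPC so the subnetwork is defined explicitly rather than
// auto-created per region.
network, err := compute.NewNetwork(ctx, projectName+"-network", &compute.NetworkArgs{
	AutoCreateSubnetworks: pulumi.Bool(false),
})
if err != nil {
	return err
}

subnetwork, err := compute.NewSubnetwork(ctx, projectName+"-subnetwork", &compute.SubnetworkArgs{
	Network:     network.ID(),
	Region:      pulumi.String("us-central1"), // assumption
	IpCidrRange: pulumi.String("10.0.0.0/16"), // assumption
})
```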
```
Type                       Name                        Status                         Info
     pulumi:pulumi:Stack        dimo-node-dimo-dev          running...                     I1117 10:59:51.944230   47822 schema.go:966] Terraform output destinationRanges = {[]}
 +   ├─ gcp:compute:Network     dimo-dev-401815-network     created (11s)
 +   ├─ gcp:compute:Subnetwork  dimo-dev-401815-subnetwork  created (11s)
 +   ├─ gcp:compute:Firewall    dimo-dev-401815-firewall    created (11s)
 ++  └─ gcp:container:Cluster   dimo-dev-401815             creating replacement (91s)..   [diff: ~network,nodeConfig,subnetwork]
```
Running in more verbose mode, it looks like it's replacing because of the nodeConfig diff. I changed the network and subnetwork, so that part may be legitimate. I'll wait until this run finishes, then try again to see if I end up with a third cluster.
Yep, now a third cluster:
```
Type                      Name                Status                          Info
     pulumi:pulumi:Stack       dimo-node-dimo-dev  running                         I1117 11:08:55.779795   58126 schema.go:562] Terraform input enable_multi_networking = false
 ++  └─ gcp:container:Cluster  dimo-dev-401815     creating replacement (107s)     [diff: ~nodeConfig]
```
Something wrong with my container.NewCluster config?
```go
cluster, err := container.NewCluster(ctx, projectName, &container.ClusterArgs{
	InitialNodeCount: pulumi.Int(1),
	//RemoveDefaultNodePool: pulumi.Bool(true),
	Location:         pulumi.String(location),
	MinMasterVersion: pulumi.String("latest"),
	Network:          Network.ID(),
	Subnetwork:       Subnetwork.ID(),
	NodeConfig: &container.ClusterNodeConfigArgs{
		MachineType: pulumi.String("n1-standard-2"),
		DiskSizeGb:  pulumi.Int(30),
		OauthScopes: pulumi.StringArray{
			pulumi.String("https://www.googleapis.com/auth/compute"),
		},
		Preemptible: pulumi.Bool(false),
	},
})
```
Here’s the diff. Looks like node config. Not sure why it is changing every run…
```
~ nodeConfig: {
    ~ oauthScopes: [
        - [0]: "https://www.googleapis.com/auth/monitoring"
        - [1]: "https://www.googleapis.com/auth/logging.write"
      ]
  }

~ nodeLocations: [
      [0]: "us-central1-b"
    ~ [1]: "us-central1-c" => "us-central1-f"
      [2]: "us-central1-a"
  ]
```
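The diff itself points at a likely cause: the live node config carries the monitoring and logging.write scopes (GKE applies defaults beyond what was declared), while the program only lists the compute scope, so Pulumi sees a nodeConfig change on every run, and nodeConfig changes force cluster replacement. The nodeLocations churn looks like GKE auto-selecting zones that can shuffle between runs. Declaring the scopes and pinning the zones should make the config converge. A sketch of the relevant args (zone names are assumptions):
```go
cluster, err := container.NewCluster(ctx, projectName, &container.ClusterArgs{
	InitialNodeCount: pulumi.Int(1),
	Location:         pulumi.String(location),
	MinMasterVersion: pulumi.String("latest"),
	Network:          Network.ID(),
	Subnetwork:       Subnetwork.ID(),
	// Pin the zones so the auto-selected list can't drift between runs.
	NodeLocations: pulumi.StringArray{
		pulumi.String("us-central1-a"),
		pulumi.String("us-central1-b"),
		pulumi.String("us-central1-c"),
	},
	NodeConfig: &container.ClusterNodeConfigArgs{
		MachineType: pulumi.String("n1-standard-2"),
		DiskSizeGb:  pulumi.Int(30),
		// Declare the scopes GKE applies by default so desired state
		// matches actual state and the perpetual diff goes away.
		OauthScopes: pulumi.StringArray{
			pulumi.String("https://www.googleapis.com/auth/compute"),
			pulumi.String("https://www.googleapis.com/auth/monitoring"),
			pulumi.String("https://www.googleapis.com/auth/logging.write"),
		},
		Preemptible: pulumi.Bool(false),
	},
})
```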