# general
q
I updated my setup from 0.17.21 to 1.2.0 (and the appropriate plugins). Now I'm trying to raise `minMasterVersion` on my GCP k8s cluster, which properly shows it needs updating (as opposed to recreating). However, I also have a custom Pulumi k8s provider stitched together based on `pulumi.all([ cluster.name, cluster.endpoint, cluster.masterAuth ])`, and all my k8s resources use this explicit `k8s.Provider`. They all show a `replace` strategy if I raise my `minMasterVersion`. I don't remember this happening with 0.17.21, which has me very worried. I wanted to use the selective update with `--target urn`, but this has not landed in 1.2.0, despite the changelog indicating that it had. Did anyone face this? Can I expect all my resources using this `k8s.Provider` to remain the same as long as that provider is created/configured the same at runtime?
```
~  gcp:container:Cluster main update [diff: ~minMasterVersion]
 ++ pulumi:providers:kubernetes main create replacement [diff: ~kubeconfig]
 +- pulumi:providers:kubernetes main replace [diff: ~kubeconfig]
 ++ gcp:container:NodePool main create replacement [diff: ~cluster]
 +- gcp:container:NodePool main replace [diff: ~cluster]
 ++ kubernetes:core:Secret cloudflare-account create replacement [diff: -metadata~provider]
```
(an excerpt from `preview --non-interactive`)
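For context, a provider stitched together this way typically looks something like the minimal sketch below. This follows the standard Pulumi GKE kubeconfig pattern; the resource arguments, the kubeconfig template, and the example Secret are illustrative, not the actual code from this thread:

```typescript
import * as gcp from "@pulumi/gcp";
import * as k8s from "@pulumi/kubernetes";
import * as pulumi from "@pulumi/pulumi";

// The GKE cluster whose minMasterVersion is being raised.
const cluster = new gcp.container.Cluster("main", {
    initialNodeCount: 1,
    minMasterVersion: "1.13.10-gke.0",
});

// Stitch a kubeconfig together from cluster outputs. While the cluster
// has a pending update, anything derived from these outputs is an
// unresolved output<string> during preview.
const kubeconfig = pulumi
    .all([cluster.name, cluster.endpoint, cluster.masterAuth])
    .apply(([name, endpoint, masterAuth]) => `apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ${masterAuth.clusterCaCertificate}
    server: https://${endpoint}
  name: ${name}
contexts:
- context:
    cluster: ${name}
    user: ${name}
  name: ${name}
current-context: ${name}
kind: Config
users:
- name: ${name}
  user:
    auth-provider:
      name: gcp
`);

// The explicit provider that all k8s resources reference.
const provider = new k8s.Provider("main", { kubeconfig });

// Example of a resource pinned to that explicit provider.
const secret = new k8s.core.v1.Secret("cloudflare-account", {
    stringData: { token: "..." },
}, { provider });
```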
w
Can you expand the diff to show what exactly is changing for both the provider and the kubernetes resources?
q
Here it is, sanitized slightly:
```
~ gcp:container/cluster:Cluster: (update)
        [id=main-6394239]
        [urn=urn:pulumi:production::project::gcp:container/cluster:Cluster::main]
        [provider=urn:pulumi:production::project::pulumi:providers:gcp::default_1_2_0::992b3bde-2c57-469e-bbe3-700071901214]
      ~ minMasterVersion: "1.12.7-gke.7" => "1.13.10-gke.0"
        +-pulumi:providers:kubernetes: (replace)
            [id=1a13cc79-0197-4787-babb-38f06a662c6f]
            [urn=urn:pulumi:production::project::gcp:container/cluster:Cluster$pulumi:providers:kubernetes::main]
          ~ kubeconfig: "long valid kubeconfig from previous deploy" => output<string>
        +-gcp:container/nodePool:NodePool: (replace)
            [id=europe-west1-d/main-6394239/main-5285878]
            [urn=urn:pulumi:production::project::gcp:container/cluster:Cluster$gcp:container/nodePool:NodePool::main]
            [provider=urn:pulumi:production::project::pulumi:providers:gcp::default_1_2_0::992b3bde-2c57-469e-bbe3-700071901214]
          ~ cluster: "main-6394239" => output<string>
```
Because the new value is `output<string>`, it means it will be dynamically resolved at runtime, correct? But does the decision to update/replace happen at the same time too?
g
The k8s provider will force a replacement of resources if certain fields in the kubeconfig change. In this case, it looks like the kubeconfig is a computed value (pulumi.Output), which means we can’t tell during preview what’s going to change in the kubeconfig. I tracked down the relevant PR, and it looks like we take the conservative choice of scheduling a replace of resources using that provider any time the kubeconfig is a computed value: https://github.com/pulumi/pulumi-kubernetes/pull/577 I’m pretty sure that only applies to the preview, and won’t actually replace the resources for a version upgrade, but I’m not 100% on that right now.
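The decision logic described here boils down to something like the following conceptual sketch. This is TypeScript purely for illustration; the actual implementation is Go code inside pulumi-kubernetes (see the PR above), and `planProviderDiff` is a hypothetical name:

```typescript
// Conceptual sketch only, not the pulumi-kubernetes implementation.
// During preview, a kubeconfig computed from pending cluster outputs is
// unknown; that unknown state is modeled here as `undefined`.
type PlannedAction = "same" | "replace";

function planProviderDiff(
    oldKubeconfig: string,
    newKubeconfig: string | undefined,
): PlannedAction {
    if (newKubeconfig === undefined) {
        // The new kubeconfig is still output<string>: the provider cannot
        // prove the connection-relevant fields are unchanged, so it
        // conservatively plans a replace (which is what the preview shows).
        return "replace";
    }
    // Once the value resolves (e.g. after the cluster upgrade has actually
    // completed), an identical kubeconfig means no replacement is needed.
    return oldKubeconfig === newKubeconfig ? "same" : "replace";
}
```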
q
To be sure of this, one could use `pulumi update --target specific-urn` just to update the cluster, right? Then, in a second update, these values would no longer be dynamic, and I could see whether the kubeconfig stays the same or the replacement strategy is still chosen. I think I'm going to upgrade the version through the GCP UI for now and wait for 1.3.0.
g
Yes, it’s the computed value causing the replacement. If the cluster has been upgraded already, it shouldn’t do that.
Ok, confirmed that upgrading the GKE version doesn’t actually cause the resources to be replaced. The preview is conservative about what might happen, but it should be safe to update the stack.
You can certainly upgrade it out of band or in a more targeted way depending on your risk tolerance
q
Thanks for following up. I'll stay on the careful side since this is my production environment, and I can't recreate a staging environment with that old GKE version anymore, so I don't have a playground.