# kubernetes
f
Hello all,
```
Diagnostics:
  pulumi:pulumi:Stack (usermanagement-svc-deploy-usermanagement-svc-dev):
    W0701 10:41:26.074241   57840 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead.
    To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
```
I’m getting this warning because the GKE cluster kubeconfig was created and exported as a stack output a while back. Is there a way to force Pulumi to regenerate it with the new GKE auth plugin? I’ve set up the new auth plugin locally and kubectl is working fine; `USE_GKE_GCLOUD_AUTH_PLUGIN` is set as well, but that only covers local kubectl/terminal usage. Any idea what has to be done on the Pulumi side so the GKE connection to the cluster doesn’t break when updating/migrating to K8s v1.25?
b
@freezing-quill-32178 you're exporting a kubeconfig from one stack to another? Can you share your code?
You can do a targeted update for a downstream stack.
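For reference, a targeted update of just the Kubernetes provider (and anything that depends on it) in a downstream stack would look roughly like the following; the URN here is a placeholder for whatever `pulumi stack --show-urns` reports:

```
pulumi up --target 'urn:pulumi:<stack>::<project>::pulumi:providers:kubernetes::propeller-api-services' --target-dependents
```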
f
Yes, the kubeconfig is created with some example code from 2 years back (no idea whether there is another way now…)
```typescript
import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";

// "cluster" is the gcp.container.Cluster resource defined elsewhere in this stack.
// Manufacture a GKE-style kubeconfig. Note that this is slightly "different"
// because of the way GKE requires gcloud to be in the picture for cluster
// authentication (rather than using the client cert/key directly).
export const kubeconfig = pulumi
  .all([cluster.name, cluster.endpoint, cluster.masterAuth])
  .apply(([name, endpoint, masterAuth]) => {
    const context = `${gcp.config.project}_${gcp.config.zone}_${name}`;

    return `apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ${masterAuth.clusterCaCertificate}
    server: https://${endpoint}
  name: ${context}
contexts:
- context:
    cluster: ${context}
    user: ${context}
  name: ${context}
current-context: ${context}
kind: Config
preferences: {}
users:
- name: ${context}
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: gcloud
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
`;
  });
```
And it is used in various stacks for deployment through a stack reference provided to the k8s.Provider
```typescript
const k8sProvider = new k8s.Provider("propeller-api-services", {
  kubeconfig: infraStack.getOutput("kubeconfig"),
});
```
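For completeness, `infraStack` in a consumer stack like that would be a `pulumi.StackReference` to the stack exporting `kubeconfig`; a minimal sketch, with the org/project/stack path as a placeholder:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Reference the infra stack that exports the kubeconfig (path is a placeholder).
const infraStack = new pulumi.StackReference("my-org/infra-project/dev");

// Build a Kubernetes provider from the referenced kubeconfig; resources created
// with { provider: k8sProvider } will talk to that GKE cluster.
const k8sProvider = new k8s.Provider("propeller-api-services", {
  kubeconfig: infraStack.getOutput("kubeconfig"),
});
```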
Even though I have set up the new GKE auth plugin locally, it still displays the warning… same deal in CI/CD without the plugin installed… I guess Pulumi is using the GCP/K8s Go client libraries to connect to the cluster (the k8s.Provider) and not kubectl?
b
Pulumi is using the kubeconfig you're building in that code
You need to update that
👍 1
f
OK, I just compared an old and a new kubeconfig for a GKE cluster… the configs differ in the user.auth-provider part…
OLD (`auth-provider: gcp`):
```yaml
user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: gcloud
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
```
NEW (`gke-gcloud-auth-plugin`):
```yaml
user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args: null
      command: gke-gcloud-auth-plugin
      env: null
      installHint: Install gke-gcloud-auth-plugin for use with kubectl by following
        https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
      interactiveMode: IfAvailable
      provideClusterInfo: true
```
b
Yea, so update the kubeconfig in your program
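A minimal sketch of one way to do that, assuming the same `cluster` outputs and imports as the snippet above and swapping only the user block for the exec plugin form:

```typescript
// Same kubeconfig template as before, but with the deprecated gcp auth-provider
// block replaced by the gke-gcloud-auth-plugin exec block from the new config.
export const kubeconfig = pulumi
  .all([cluster.name, cluster.endpoint, cluster.masterAuth])
  .apply(([name, endpoint, masterAuth]) => {
    const context = `${gcp.config.project}_${gcp.config.zone}_${name}`;

    return `apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ${masterAuth.clusterCaCertificate}
    server: https://${endpoint}
  name: ${context}
contexts:
- context:
    cluster: ${context}
    user: ${context}
  name: ${context}
current-context: ${context}
kind: Config
preferences: {}
users:
- name: ${context}
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: gke-gcloud-auth-plugin
      installHint: Install gke-gcloud-auth-plugin for use with kubectl by following
        https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
      interactiveMode: IfAvailable
      provideClusterInfo: true
`;
  });
```

Note that `gke-gcloud-auth-plugin` then has to be installed wherever `pulumi up` runs (CI/CD included), since the exec credential shells out to the plugin the same way kubectl does.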
f
Great, thanks… is there some better way now, instead of constructing the kubeconfig yourself?
b
nope, you still need to do that
👀 1