# kubernetes
b
@freezing-quill-32178 you're exporting a kubeconfig from one stack to another? Can you share your code?
You can do a targeted update for a downstream stack
f
Yes, the kubeconfig is created with some example code from 2 years back (no idea whether there is another way now)…
```typescript
// Manufacture a GKE-style kubeconfig. Note that this is slightly "different"
// because of the way GKE requires gcloud to be in the picture for cluster
// authentication (rather than using the client cert/key directly).
export const kubeconfig = pulumi
  .all([cluster.name, cluster.endpoint, cluster.masterAuth])
  .apply(([name, endpoint, masterAuth]) => {
    const context = `${gcp.config.project}_${gcp.config.zone}_${name}`;

    return `apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ${masterAuth.clusterCaCertificate}
    server: https://${endpoint}
  name: ${context}
contexts:
- context:
    cluster: ${context}
    user: ${context}
  name: ${context}
current-context: ${context}
kind: Config
preferences: {}
users:
- name: ${context}
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: gcloud
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
`;
  });
```
And it is used in various stacks for deployment, through a stack reference provided to the k8s.Provider:
```typescript
const k8sProvider = new k8s.Provider("propeller-api-services", {
  kubeconfig: infraStack.getOutput("kubeconfig"),
});
```
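Downstream resources in those stacks then pass that provider explicitly; a minimal sketch (the namespace name is illustrative):
```typescript
// Any Kubernetes resource in the downstream stack is created through the
// provider built from the referenced kubeconfig. "api" is an illustrative name.
const ns = new k8s.core.v1.Namespace("api", {}, { provider: k8sProvider });
```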
Even though I have the new GKE auth plugin set up locally, it still displays the warning… same deal in CI/CD without the plugin installed… I guess Pulumi is using the GCP/Kubernetes Go client libraries to connect to the cluster (via the k8s.Provider) rather than kubectl?
b
Pulumi is using the kubeconfig you're building in that code
You need to update that
👍 1
f
OK, I just compared the old and new kubeconfig for a GKE cluster… the configs differ in the user's auth-provider section…
Old (legacy `gcp` auth-provider):
```yaml
user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: gcloud
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
```
New (`gke-gcloud-auth-plugin` exec plugin):
```yaml
user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args: null
      command: gke-gcloud-auth-plugin
      env: null
      installHint: Install gke-gcloud-auth-plugin for use with kubectl by following
        https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
      interactiveMode: IfAvailable
      provideClusterInfo: true
```
b
Yea, so update the kubeconfig in your program
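Something like this should work: a minimal sketch of the same kubeconfig template with the legacy `auth-provider` block swapped for the `exec` plugin (untested; adjust to your cluster setup):
```typescript
// Same GKE-style kubeconfig as before, but the user section now uses the
// gke-gcloud-auth-plugin exec plugin instead of the legacy gcp auth-provider.
export const kubeconfig = pulumi
  .all([cluster.name, cluster.endpoint, cluster.masterAuth])
  .apply(([name, endpoint, masterAuth]) => {
    const context = `${gcp.config.project}_${gcp.config.zone}_${name}`;

    return `apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ${masterAuth.clusterCaCertificate}
    server: https://${endpoint}
  name: ${context}
contexts:
- context:
    cluster: ${context}
    user: ${context}
  name: ${context}
current-context: ${context}
kind: Config
preferences: {}
users:
- name: ${context}
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: gke-gcloud-auth-plugin
      installHint: Install gke-gcloud-auth-plugin for use with kubectl by following
        https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
      interactiveMode: IfAvailable
      provideClusterInfo: true
`;
  });
```
Note that the `gke-gcloud-auth-plugin` binary still has to be installed wherever Pulumi runs, including in CI/CD.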
f
Great, thanks… is there a better way to do this now, instead of constructing the kubeconfig yourself?
b
nope, you still need to do that
👀 1