# getting-started
Hi everybody, this is a rather basic question; any ideas will be greatly appreciated. I'm using a Helm chart to deploy a cert-manager resource into a GCP cluster. Resources must be provisioned in a hierarchical order (cluster first, then cert-manager). So far so good, since it is possible to declare dependencies using the ComponentResourceOptions. Here's my problem: I want to use the kubernetes.helm.v3.Chart class to create the cert-manager object, and I need to somehow pass the newly provisioned cluster as an argument to indicate which cluster cert-manager belongs to. Otherwise, my current context (the cluster kubectl is pointing at) will be used to host the cert-manager resource. How can I accomplish this without manual intervention? Thanks!
You would pass this in as the provider argument
Create a provider from the cluster you just created, or from a kubeconfig you already have
I'm gonna look into that. Thanks!
@bored-table-20691 Do you have some code you could show for this?
I'm somewhat uncomfortable with the idea of spawning a child process to let my GCP client know which cluster it's supposed to point to. It seems to me that this approach is too imperative, isn't it?
import * as k8s from "@pulumi/kubernetes";
import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";

const name = "helloworld";

// Create a GKE cluster
const engineVersion = gcp.container.getEngineVersions().then(v => v.latestMasterVersion);
const cluster = new gcp.container.Cluster(name, {
    initialNodeCount: 2,
    minMasterVersion: engineVersion,
    nodeVersion: engineVersion,
    nodeConfig: {
        machineType: "n1-standard-1",
        oauthScopes: [
            "https://www.googleapis.com/auth/compute",
            "https://www.googleapis.com/auth/devstorage.read_only",
            "https://www.googleapis.com/auth/logging.write",
            "https://www.googleapis.com/auth/monitoring",
        ],
    },
});

// Export the Cluster name
export const clusterName = cluster.name;

// Manufacture a GKE-style kubeconfig. Note that this is slightly "different"
// because of the way GKE requires gcloud to be in the picture for cluster
// authentication (rather than using the client cert/key directly).
export const kubeconfig = pulumi.
    all([ cluster.name, cluster.endpoint, cluster.masterAuth ]).
    apply(([ name, endpoint, masterAuth ]) => {
        const context = `${gcp.config.project}_${gcp.config.zone}_${name}`;
        return `apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ${masterAuth.clusterCaCertificate}
    server: https://${endpoint}
  name: ${context}
contexts:
- context:
    cluster: ${context}
    user: ${context}
  name: ${context}
current-context: ${context}
kind: Config
preferences: {}
users:
- name: ${context}
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: gcloud
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
`;
    });

// Create a Kubernetes provider instance that uses our cluster from above.
const clusterProvider = new k8s.Provider(name, {
    kubeconfig: kubeconfig,
});
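To close the loop on the original question, here is a sketch of deploying cert-manager against that provider. The chart version and values shown are illustrative assumptions (pick whatever release you actually need); the key part is passing `clusterProvider` in the options object, which makes the Chart target the new cluster rather than whatever kubectl context is currently active:

const certManager = new k8s.helm.v3.Chart("cert-manager", {
    chart: "cert-manager",
    // Version and values below are placeholders -- adjust for your setup.
    version: "v1.5.3",
    fetchOpts: { repo: "https://charts.jetstack.io" },
    namespace: "cert-manager",
    values: { installCRDs: true },
}, { provider: clusterProvider }); // <-- ties the chart to the new cluster

Because the provider's kubeconfig is an output of the cluster resource, Pulumi also infers the cluster-then-chart ordering automatically, with no manual intervention or imperative kubectl context switching.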