# getting-started
b
Hello everyone, I am trying to use Pulumi to manage our k8s clusters. I don't understand how to use Helm Releases properly with Pulumi - it hangs forever, even though I've followed several examples. One question I have: how do I pass my kubeconfig/cluster provider (created earlier in my sample stack) to the Helm Release? For regular Deployments/Services/... I can use the `opts` argument to pass the cluster provider, but I don't see it documented on the Helm Release class.
c
Not sure if this helps, but this shows how it can be done in C#, if I understand right: https://github.com/pulumi/examples/blob/master/azure-cs-aks-helm/MyStack.cs
b
I tried this approach, but it just got stuck forever on `pulumi up`
f
hi @broad-parrot-139, I managed to get it to work like this (I'm using TS):
```typescript
import * as k8s from "@pulumi/kubernetes";

// Pass the cluster's provider (and an explicit dependency on the cluster)
// through the third argument, the resource options.
const certManagerRelease = new k8s.helm.v3.Release(
    "cert-manager",
    {
        chart: "cert-manager",
        version: version,
        repositoryOpts: {
            repo: repoUrl,
        },
        values: { installCRDs: true },
        namespace: "cert-manager",
    },
    {
        parent: this,
        dependsOn: [cluster],
        provider: cluster.provider,
    }
);
```
Perhaps you want to make sure that Pulumi is fetching the chart correctly, which you can test by setting `verify: true` in the args. As for the provider, you can pass it in the opts. I found these docs really insightful: https://www.pulumi.com/docs/tutorials/kubernetes/gke/
```typescript
import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";
import * as k8s from "@pulumi/kubernetes";

// Render a kubeconfig for the GKE cluster from its outputs.
export const kubeconfig = pulumi
    .all([cluster.name, cluster.endpoint, cluster.masterAuth])
    .apply(([name, endpoint, masterAuth]) => {
        const context = `${gcp.config.project}_${gcp.config.zone}_${name}`;
        return `apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ${masterAuth.clusterCaCertificate}
    server: https://${endpoint}
  name: ${context}
contexts:
- context:
    cluster: ${context}
    user: ${context}
  name: ${context}
current-context: ${context}
kind: Config
preferences: {}
users:
- name: ${context}
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: gcloud
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
`;
    });

// Create a Kubernetes provider instance that uses our cluster from above.
const clusterProvider = new k8s.Provider(name, {
    kubeconfig: kubeconfig,
});
```
This works like a charm on GCP. Basically, what you want to do is build a kubeconfig from your cluster's outputs, wrap it in a `k8s.Provider`, and pass that provider to your Helm Release resource. Hope this helps!
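If it helps, the kubeconfig templating can be factored into a plain function, which makes it easy to sanity-check outside of a Pulumi run. A minimal sketch - `makeGkeKubeconfig` and its parameter names are my own invention, not Pulumi API:

```typescript
// Sketch: pure helper that renders a GKE-style kubeconfig from cluster outputs.
// makeGkeKubeconfig is a hypothetical name; only the YAML shape comes from the
// GKE tutorial above.
function makeGkeKubeconfig(
    project: string,
    zone: string,
    name: string,
    endpoint: string,
    clusterCaCertificate: string
): string {
    const context = `${project}_${zone}_${name}`;
    return `apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ${clusterCaCertificate}
    server: https://${endpoint}
  name: ${context}
contexts:
- context:
    cluster: ${context}
    user: ${context}
  name: ${context}
current-context: ${context}
kind: Config
preferences: {}
users:
- name: ${context}
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: gcloud
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
`;
}

// Example: render a config for a made-up cluster.
const cfg = makeGkeKubeconfig("my-project", "us-central1-a", "demo", "1.2.3.4", "Q0FDRVJU");
console.log(cfg.includes("server: https://1.2.3.4")); // true
```

Inside the Pulumi program you'd then call it from the `apply` callback, e.g. `pulumi.all([cluster.name, cluster.endpoint, cluster.masterAuth]).apply(([name, endpoint, auth]) => makeGkeKubeconfig(gcp.config.project!, gcp.config.zone!, name, endpoint, auth.clusterCaCertificate))`, and feed the result to `new k8s.Provider(...)`.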
b
Thanks, will try it out!