How do I create a K8s cluster and also deploy Helm Releases to it in the same stack?
# kubernetes
b
How do I create a K8s cluster and also deploy Helm Releases to it in the same stack? Basically something like:
const cluster = new KubernetesCluster(...)

const app1 = new Helm.V3.Release(..., ..., { cluster })
Without this, it requires that I first comment out line 2, deploy just the cluster, save the kubeconfig, then uncomment line 2 and deploy again.
s
You should be able to create an explicit Kubernetes provider (instead of using the default) and pass it the Kubeconfig generated from creating the cluster. Here’s an example of how that’s done (in TypeScript): https://github.com/pulumi/zephyr-app/blob/blog/multi-project/infra/index.ts#L15-L16
Note that, in this particular case, the Kubeconfig is being pulled from another stack as a stack reference, but the basic idea remains the same if you are creating the cluster in the same stack.
Oh, and then you need to specify the explicit provider when you create your Helm/Kubernetes objects. Later on in that same file I linked above you can see how to do that.
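Something like this, as a minimal sketch (clusterKubeconfig, the chart name, and the repo URL are just placeholders; substitute whatever kubeconfig output your cluster resource actually exposes):

import * as pulumi from '@pulumi/pulumi'
import * as k8s from '@pulumi/kubernetes'

// however you create the cluster, capture its generated kubeconfig as an
// Output<string>; the exact property name differs per cluster provider
declare const clusterKubeconfig: pulumi.Output<string>

// explicit provider pointed at the new cluster, instead of the default
// provider that reads your local kubeconfig
const provider = new k8s.Provider('cluster-provider', {
  kubeconfig: clusterKubeconfig,
})

// pass the explicit provider to every Helm/Kubernetes resource
const app1 = new k8s.helm.v3.Release('app1', {
  chart: 'nginx',
  repositoryOpts: { repo: 'https://charts.bitnami.com/bitnami' },
}, { provider })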
b
@salmon-account-74572 I had previously tried to do something like this
const provider = new k8s.Provider('base-kube-provider', {
    kubeconfig: JSON.stringify(cluster.kubeConfigs[0])
  }, {
    dependsOn: cluster
  })
And then use that provider in the Helm Release opts, but I get an error essentially saying no kubeConfig was specified
do I have to explicitly use getOutput on the cluster?
s
The code I shared with you works without JSON.stringify, but that may depend on how you are creating the cluster. Can you share your code? If the cluster is being created in the same Pulumi program, then no, you don’t need getOutput (that’s only for stack references, i.e. when the cluster is getting created in a different Pulumi stack/project).
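For comparison, the stack-reference case (where getOutput does come into play) looks roughly like this; the stack name here is made up:

import * as pulumi from '@pulumi/pulumi'
import * as k8s from '@pulumi/kubernetes'

// only needed when the cluster lives in a different stack/project
const infra = new pulumi.StackReference('my-org/cluster-infra/dev')
const kubeconfig = infra.getOutput('kubeconfig')

// explicit provider built from the referenced stack's kubeconfig output
const provider = new k8s.Provider('remote-cluster-provider', { kubeconfig })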
b
import * as digitalocean from '@pulumi/digitalocean'
import * as k8s from '@pulumi/kubernetes'

// create cluster
const cluster = new digitalocean.KubernetesCluster('base-kube-cluster',
    {
      //...normal declared cluster with sensitive values
    }
)

// new cluster does not exist in local kubesettings yet
const provider = new k8s.Provider('base-kube-provider', {
    kubeconfig: cluster.kubeConfigs[0].apply(JSON.stringify)
  }, {
    dependsOn: cluster
  })

// create db in new cluster
const db = new k8s.helm.v3.Release('crdb', {
    name: 'crdb',
    chart: 'cockroachdb',
    repositoryOpts: {
      repo: 'https://charts.cockroachdb.com',
    },
    values: {
      fullnameOverride: 'crdb',
      'single-node': true,
      statefulset: {
        replicas: 1
      }
    },
  }, {dependsOn: cluster, provider})
@salmon-account-74572 ^
s
Ah, this is DO. I believe this issue may be part of what you’re experiencing: https://github.com/pulumi/pulumi-digitalocean/issues/312
b
Hmm. That seems like a slightly different issue; the cluster hasn’t been up anywhere near 7 days. My issue is that I can’t seem to extract the kubeconfig from the recently created cluster, so Pulumi tries to use my local kubeconfig, which would obviously fail.
Using DO, I would do a doctl kubernetes cluster kubeconfig save base-kube, which would update my local kubeconfig after the cluster has been deployed, and then all other subsequent commands work fine.
But I’m stuck at the part where I need to dynamically reference the kubeconfig of the new cluster when creating the crdb chart in the same flow.
Thanks for your help so far!
s
Of course, I’m happy to try to help. For troubleshooting purposes, can you comment out everything except creating the cluster, and then add the Kubeconfig as a stack output? This will help us ensure that you’re referencing everything correctly when creating the explicit provider.
After you run a pulumi up with only the cluster being created, then run pulumi stack output kubeconfig (or whatever you called the stack output) and see if that looks correct/appropriate/functional.
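For the DO cluster in your snippet, that troubleshooting step could be as small as this (wrapping it in pulumi.secret is just a precaution on my part, since the kubeconfig contains credentials):

import * as pulumi from '@pulumi/pulumi'

// export the generated kubeconfig object so it can be inspected with
// `pulumi stack output kubeconfig --show-secrets`
export const kubeconfig = pulumi.secret(cluster.kubeConfigs[0])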
b
Maybe I’m just missing something with the code. Could you show me how to pass a value from one resource to another with an output?
In such a way that it works in preview mode
I do see the kubeconfig string when I run cluster.kubeConfigs[0].apply(v => console.log('out', v)), but I want to pass this string as kubeconfig to the new provider.
s
In reviewing the API docs, it looks like maybe you need to use cluster.kubeConfigs[0].rawConfig when defining the new provider?
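Putting that together with your snippet above, the provider/Release wiring would look roughly like this (a sketch, not tested against your cluster):

// rawConfig holds the generated kubeconfig YAML for the new DOKS cluster, so
// the provider no longer falls back to the local kubeconfig
const provider = new k8s.Provider('base-kube-provider', {
    kubeconfig: cluster.kubeConfigs[0].rawConfig,
})

// the explicit provider sends the Release to the new cluster; an extra
// dependsOn shouldn't be needed since the kubeconfig output already depends
// on the cluster
const db = new k8s.helm.v3.Release('crdb', {
    chart: 'cockroachdb',
    repositoryOpts: { repo: 'https://charts.cockroachdb.com' },
}, { provider })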
b
Ah perfect! Thank you so much
I’m loving pulumi at this rate!
s
Awesome, very glad to hear that!