# kubernetes
How do I create a K8s cluster and also deploy Helm releases to it in the same stack? Basically something like
const cluster = new KubernetesCluster(...)

const app1 = new Helm.V3.Release(..., ..., { cluster })
Without this, I first have to comment out the Helm release line, deploy just the cluster, save the kubeconfig, then uncomment it and deploy again.
You should be able to create an explicit Kubernetes provider (instead of using the default) and pass it the Kubeconfig generated from creating the cluster. Here’s an example of how that’s done (in TypeScript): Note that, in this particular case, the Kubeconfig is being pulled from another stack as a stack reference, but the basic idea remains the same if you are creating the cluster in the same stack.
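A minimal sketch of that pattern (the stack name and the `kubeconfig` output name are illustrative, not from the linked example):

```typescript
import * as pulumi from '@pulumi/pulumi'
import * as k8s from '@pulumi/kubernetes'

// Reference the stack that created the cluster (stack name is illustrative).
const clusterStack = new pulumi.StackReference('my-org/cluster-stack/dev')

// Pull the exported kubeconfig out of that stack's outputs.
const kubeconfig = clusterStack.getOutput('kubeconfig')

// Explicit provider that talks to the new cluster instead of whatever
// the default provider finds in the local kubeconfig.
const provider = new k8s.Provider('cluster-provider', { kubeconfig })

// Any Kubernetes/Helm resource then opts into that provider explicitly.
const ns = new k8s.core.v1.Namespace('apps', {}, { provider })
```

If the cluster lives in the same program, you would pass the cluster's own kubeconfig output instead of a stack reference.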
Oh, and then you need to specify the explicit provider when you create your Helm/Kubernetes objects. Later on in that same file I linked above you can see how to do that.
@salmon-account-74572 I had previously tried to do something like this
const provider = new k8s.Provider('base-kube-provider', {
    kubeconfig: JSON.stringify(cluster.kubeConfigs[0])
  }, {
    dependsOn: cluster
  })
And then use that provider in the Helm Release opts, but I get an error essentially saying no kubeConfig was specified
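A likely cause of that error: `JSON.stringify(cluster.kubeConfigs[0])` runs at program-construction time on Pulumi's `Output` wrapper, not on the resolved kubeconfig value, so the provider receives a string with no usable kubeconfig in it. A tiny mock (not the real Pulumi API) of the difference between stringifying the wrapper and lifting `JSON.stringify` inside `apply`:

```typescript
// FakeOutput stands in for Pulumi's Output<T>: the value is only
// meant to be reached through apply(), never read directly.
class FakeOutput<T> {
  private readonly value: T
  constructor(value: T) {
    this.value = value
  }
  // apply() transforms the eventual value and wraps the result again.
  apply<U>(fn: (v: T) => U): FakeOutput<U> {
    return new FakeOutput(fn(this.value))
  }
  // For demonstration only: unwrap the value.
  get(): T {
    return this.value
  }
}

const kubeConfig = new FakeOutput({ rawConfig: 'apiVersion: v1' })

// Wrong: serializes the wrapper object itself, not the kubeconfig inside it.
const wrong = JSON.stringify(kubeConfig)

// Right: JSON.stringify runs on the resolved value inside apply().
const right = kubeConfig.apply(JSON.stringify).get()
```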
do I have to explicitly use getOutput on the cluster?
The code I shared with you works without `getOutput`, but that may depend on how you are creating the cluster. Can you share your code? If the cluster is being created in the same Pulumi program, then no, you don't need `getOutput` (that's only for stack references, i.e. when the cluster is getting created in a different Pulumi stack/project).
import * as digitalocean from '@pulumi/digitalocean'
import * as k8s from '@pulumi/kubernetes'

// create cluster
const cluster = new digitalocean.KubernetesCluster('base-kube-cluster', {
  // ...normal declared cluster with sensitive values
})

// new cluster does not exist in local kube settings yet
const provider = new k8s.Provider('base-kube-provider', {
    kubeconfig: cluster.kubeConfigs[0].apply(JSON.stringify)
  }, {
    dependsOn: cluster
  })

// create db in new cluster
const db = new k8s.helm.v3.Release('crdb', {
    name: 'crdb',
    chart: 'cockroachdb',
    repositoryOpts: {
      repo: '<>',
    },
    values: {
      fullnameOverride: 'crdb',
      'single-node': true,
      statefulset: {
        replicas: 1
      }
    }
  }, { dependsOn: cluster, provider })
@salmon-account-74572 ^
Ah, this is DO. I believe this issue may be part of what you’re experiencing:
Hmm, that seems like a slightly different issue; the cluster hasn't been up anywhere near 7 days. My issue is that I can't seem to extract the kubeconfig from the recently created cluster, so Pulumi tries to use my local kubeconfig, which would obviously fail.
Using DO, I would do a
doctl kubernetes cluster kubeconfig save base-kube
which updates my local kubeconfig after the cluster has been deployed, and then all subsequent commands work fine.
But I’m stuck at the part where I can dynamically reference the kubeConfig of the new cluster when creating the crdb chart in the same flow
Thanks for your help so far!
Of course, I’m happy to try to help. For troubleshooting purposes, can you comment out everything except creating the cluster, and then add the Kubeconfig as a stack output? This will help us ensure that you’re referencing everything correctly when creating the explicit provider.
After you run a
pulumi up
with only the cluster being created, then run
pulumi stack output kubeconfig
(or whatever you called the stack output) and see if that looks correct/appropriate/functional.
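For that troubleshooting step, the export might look like this (a sketch; `rawConfig` is the raw kubeconfig YAML field on the entries of DO's `kubeConfigs` output, and `pulumi.secret` keeps the credentials encrypted in state):

```typescript
import * as pulumi from '@pulumi/pulumi'
import * as digitalocean from '@pulumi/digitalocean'

const cluster = new digitalocean.KubernetesCluster('base-kube-cluster', {
  // ...cluster settings as before
})

// Expose the kubeconfig as a stack output, marked secret since it
// contains credentials. Because it is secret, read it back with
// `pulumi stack output kubeconfig --show-secrets`.
export const kubeconfig = pulumi.secret(cluster.kubeConfigs[0].rawConfig)
```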
Maybe I’m just missing something with the code. Could you show me how to pass a value from one resource to another with output?
In such a way that it works in preview mode
I do see the kubeconfig string when I run `pulumi stack output kubeconfig`
But I want to pass this string as kubeconfig to the new provider
In reviewing the API docs, it looks like maybe you need to use
when defining the new provider?
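The API-docs suggestion is most likely the `rawConfig` property on the cluster's `kubeConfigs` entries, which already holds the kubeconfig as a plain YAML string, so no JSON serialization is needed at all. A sketch, assuming that is the fix being pointed at:

```typescript
import * as digitalocean from '@pulumi/digitalocean'
import * as k8s from '@pulumi/kubernetes'

const cluster = new digitalocean.KubernetesCluster('base-kube-cluster', {
  // ...cluster settings as before
})

// kubeConfigs[0].rawConfig is a complete kubeconfig YAML string,
// so it can be handed to the provider directly.
const provider = new k8s.Provider('base-kube-provider', {
  kubeconfig: cluster.kubeConfigs[0].rawConfig,
}, { dependsOn: cluster })
```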
Ah perfect! Thank you so much
I’m loving pulumi at this rate!
Awesome, very glad to hear that!