# kubernetes
b
I am trying to install a couple of Helm charts. They work beautifully when using the Helm CLI, i.e.
helm repo add cockroachdb https://charts.cockroachdb.com/
helm repo add zitadel https://charts.zitadel.com

# Install CockroachDB. Contains helm hooks
helm install crdb cockroachdb/cockroachdb \
  --set fullnameOverride=crdb \
  --set single-node=true \
  --set statefulset.replicas=1

# Install ZITADEL
helm install sifauth zitadel/zitadel \
  --set zitadel.masterkey="1FhB0hgbG9sU5Yzw6NRknfstbQbuADNP" \
  --set zitadel.configmapConfig.ExternalSecure=false \
  --set zitadel.configmapConfig.TLS.Enabled=false \
  --set zitadel.secretConfig.Database.cockroach.User.Password="admin" \
  --set replicaCount=1
However, when I try to use Pulumi Helm Releases with the same values, the release consistently fails. What am I missing?
const provider = new k8s.Provider('base-kube-provider', {
    kubeconfig: cluster.kubeConfigs[0].rawConfig
  }, { dependsOn: [cluster] })

const crdb = new k8s.helm.v3.Release('crdb', {
    name: 'crdb',
    chart: 'cockroachdb',
    waitForJobs: true,
    repositoryOpts: {
      repo: 'https://charts.cockroachdb.com',
    },
    values: {
      fullnameOverride: 'crdb',
      'single-node': true,
      statefulset: {
        replicas: 1
      }
    },
  }, {dependsOn: [provider], provider})

new k8s.helm.v3.Release('sifauth', {
    name: 'sifauth',
    chart: 'zitadel',
    waitForJobs: true,
    repositoryOpts: {
      repo: 'https://charts.zitadel.com',
    },
    values: {
      replicaCount: 1,
      zitadel: {
        masterkey: authConfig.getSecret('masterKey'),
        configmapConfig: {
          ExternalSecure: false,
          TLS: {
            Enabled: false
          },
          secretConfig: {
            Database: {
              cockroach: {
                User: {
                  Password: authConfig.getSecret('dbPassword')
                }
              }
            }
          }
        }
      },
    },
  }, {dependsOn: [crdb], provider})
Error:
kubernetes:helm.sh/v3:Release (crdb):
    warning: Helm release "crdb" was created but has a failed status. Use the `helm` command to investigate the error, correct it, then retry. Reason: timed out waiting for the condition
    error: 1 error occurred:
        * Helm release "default/crdb" was created, but failed to initialize completely. Use Helm CLI to investigate.: failed to become available within allocated timeout. Error: Helm Release default/crdb: timed out waiting for the condition
# kubectl get pods
NAME     READY   STATUS    RESTARTS   AGE
crdb-0   0/1     Running   0          9m10s
f
Have you tried increasing the timeout? We have TONS of issues with the kubernetes Release resource in Pulumi, especially on our ephemeral environments where we are constantly building fresh clusters. Our issues are primarily around not being able to find the CRDs and resources in time, though.
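For reference, the timeout is just another input on the Release resource (seconds, default 300). A rough sketch, reusing the provider from the code above; the 900 here is an arbitrary example value:

// Same crdb Release as above, just with a longer await window.
new k8s.helm.v3.Release('crdb', {
    name: 'crdb',
    chart: 'cockroachdb',
    repositoryOpts: { repo: 'https://charts.cockroachdb.com' },
    timeout: 900, // seconds; pick whatever the chart actually needs
    values: { fullnameOverride: 'crdb', 'single-node': true, statefulset: { replicas: 1 } },
  }, { provider })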
b
@full-eve-52536 The default timeouts should already be plenty long, given how quickly the same charts install via the helm CLI.
TBH, I am almost giving up and just using Command.local to create command script resources that create, update, and delete my various helm charts.
f
We use the atomic: true and dependencyUpdate: true directives and that has helped us a little bit; not sure that will help your specific use case though.
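On the Pulumi Release resource those map to the atomic and dependencyUpdate inputs. A minimal sketch (chart and repo are placeholders; k8s and provider as in the code above):

new k8s.helm.v3.Release('example', {
    chart: 'some-chart',                                      // placeholder chart
    repositoryOpts: { repo: 'https://example.com/charts' },   // placeholder repo
    atomic: true,            // roll the release back if it never becomes ready
    dependencyUpdate: true,  // refresh chart dependencies before installing
  }, { provider })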
b
Thankfully I’m only using k8s to host external repo projects. The command local scripts can update values from a file on each update run.
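Something roughly like this, for the record (untested sketch; chart and file names are placeholders, and it assumes the local kubectl context already points at the cluster):

import * as command from '@pulumi/command'
import * as fs from 'fs'

// Read the values file so that changes to it trigger an update run.
const crdbValues = fs.readFileSync('crdb-values.yaml', 'utf8')

new command.local.Command('crdb-helm', {
    create: 'helm upgrade --install crdb cockroachdb/cockroachdb -f crdb-values.yaml',
    update: 'helm upgrade --install crdb cockroachdb/cockroachdb -f crdb-values.yaml',
    delete: 'helm uninstall crdb',
    triggers: [crdbValues],   // re-run the update command when the file changes
  })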
c
@bitter-father-26598 the issue is almost certainly related to the CRDs not being ready yet, but you can generally determine this using the helm CLI to investigate the error modes. dependsOn: [] can be a little tricky - it just sets up the dependency graph, not a timing aspect. I think you want to use crdb.status, but I’m not 100% sure on that.
In my experimentation with Command.local you’ll almost certainly run into other blockers. For example, you cannot provide a k8s provider to Command.local, so you’ll end up needing to export the kubectl context into the environment for Command.local - and that drives up the complexity a fair amount.
f
@curved-kitchen-24115 Do you have a solution for getting around the issue when the CRDs are not ready yet? We just end up retrying the entire pulumi up.
c
@full-eve-52536 my first port of call would be to try crdb.status in the dependsOn… I read that somewhere, but I cannot remember where or whether it worked. In our case the CRDs we create occur within the <timeout> window that helm.Release waits (5m maybe?), so it reconciles itself.
dependsOn broadly says: “wait until this object exists”. In the example above, crdb exists when it is defined, so that passes the dependsOn check immediately. So you want to depend on something that takes time. I think the rationale is that status isn’t ready until the install has occurred.
But I want to caveat this with: I’ve not tried this. I’m just hypothesizing around what I do understand about dependsOn.
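To make that concrete, one way to express “depend on something the install produces” is to thread an output of crdb.status into the second Release (untested sketch; the crdb-public service name follows the cockroachdb chart’s naming, and the Database.cockroach.Host key is only illustrative, so check the zitadel chart’s values):

import * as pulumi from '@pulumi/pulumi'

// crdb.status only resolves once the crdb Release has finished installing, so
// using it in sifauth's values gives Pulumi a data dependency on the install,
// not just on the crdb object existing in the program.
const crdbHost = pulumi.interpolate`${crdb.status.name}-public.${crdb.status.namespace}.svc.cluster.local`

new k8s.helm.v3.Release('sifauth', {
    name: 'sifauth',
    chart: 'zitadel',
    repositoryOpts: { repo: 'https://charts.zitadel.com' },
    values: {
      zitadel: {
        configmapConfig: {
          Database: { cockroach: { Host: crdbHost } }, // illustrative key only
        },
      },
    },
  }, { dependsOn: [crdb], provider })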
f
Gotcha. So you install your CRDs as part of a separate step? Most of the Release resources that we use end up installing the CRDs on their own.
I guess I'm failing to understand what crdb.status is and where it's coming from.
c
There are a couple of charts that we use which have their CRDs broken out. This is because helm typically cannot upgrade CRDs.
@full-eve-52536 it’s an output of helm.Release; additionally, it’s a Resource itself, so you can dependsOn it.
f
Ohhhh, I understand now. I was confused at first because I thought crdb was its own baked-in resource in Pulumi.
Thank you for explaining that. That's an interesting theory we'll have to try. We install about 5 or 6 helm charts using Release as part of our stack, and almost every time we run pulumi up we get "the server could not find the requested resource".
c
Sorry, correction above: helm.Release.status is an Output, and you can dependsOn those if I remember correctly.
The whole CRD world is so powerful, but also really complex. Consider the example of my-crd-package-v1 and my-app-package-v1: if my-app-package-v1 depends on CRDs defined in my-crd-package-v1, how can you upgrade to my-crd-package-v2 without breaking my-app-package-v1? So, typically, you end up with MyCRD-V1 and MyCRD-v2, etc., and then the app package can decide which one it wants to implement.
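As a rough illustration of the shape that takes at the Kubernetes level (group/kind names made up): one CRD object serving both versions side by side.

import * as k8s from '@pulumi/kubernetes'

// A single CRD that serves v1 and v2 at the same time, so a package built
// against v1 keeps working while newer consumers move to v2.
new k8s.apiextensions.v1.CustomResourceDefinition('widget-crd', {
    metadata: { name: 'widgets.example.com' },
    spec: {
      group: 'example.com',
      names: { kind: 'Widget', plural: 'widgets', singular: 'widget' },
      scope: 'Namespaced',
      versions: [
        { name: 'v1', served: true, storage: false,
          schema: { openAPIV3Schema: { type: 'object', 'x-kubernetes-preserve-unknown-fields': true } } },
        { name: 'v2', served: true, storage: true,
          schema: { openAPIV3Schema: { type: 'object', 'x-kubernetes-preserve-unknown-fields': true } } },
      ],
    },
  })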
@full-eve-52536 excuse my sidebar thought! Regarding your timing issue, I’d start by installing the CRDs by hand to get a sense of how long helm spends at that stage. This may require a demo k8s cluster, etc. Perhaps updates of them take a while, etc, etc.
b
@curved-kitchen-24115 For the Command.local kubeconfig bit, you can actually create a separate local.Command to save the raw kubeConfig, the same way the k8s.Provider resource gets created.
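Something like this, for example (sketch only; the file path is arbitrary, and cluster is the same cluster object the k8s.Provider above is built from):

import * as command from '@pulumi/command'

const kubeconfigPath = './kubeconfig.yaml'

// Write the same raw kubeconfig the k8s.Provider uses out to a local file...
const writeKubeconfig = new command.local.Command('write-kubeconfig', {
    create: `cat > ${kubeconfigPath}`,
    stdin: cluster.kubeConfigs[0].rawConfig,
  })

// ...then point helm at it via KUBECONFIG for the Command.local releases.
new command.local.Command('crdb-helm', {
    create: 'helm upgrade --install crdb cockroachdb/cockroachdb',
    delete: 'helm uninstall crdb',
    environment: { KUBECONFIG: kubeconfigPath },
  }, { dependsOn: [writeKubeconfig] })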
f
@curved-kitchen-24115 Haha no worries! That's also a very good idea!
c
@bitter-father-26598 oh, that’s neat.
b
I have actually noticed that dependsOn sequences things accurately. Everything I sequence with it seems to only start building after its dependency marks itself as created.