# general
h
Hello, I have a question regarding deploying helm charts with Pulumi. I understand that tiller does not need to be installed, as Pulumi expands the chart and submits it to k8s. That being said, I'm trying to deploy an operator and its components to k8s and can do so manually with helm + tiller, but if I try to use Pulumi, the second component fails to deploy. (Rest in comments)
I'm trying to deploy the Confluent Operator via Pulumi (https://docs.confluent.io/current/installation/operator/co-deployment.html). I'm able to deploy the operator and manager per https://docs.confluent.io/current/installation/operator/co-deployment.html#step-1-install-co-long, but once I try to deploy the rest of the components, they don't seem to work without tiller.
Copy code
helm install \
-f ./providers/aws.yaml \
--name zookeeper \
--namespace operator \
--set zookeeper.enabled=true \
./confluent-operator
Is this helm/operator setup different than others out there? Does Pulumi support the `name` option in the `helm install --name zookeeper` command?
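A minimal sketch of how those two installs are usually modeled, assuming the pulumi-kubernetes behavior where the Chart resource name is passed to `helm template` as the release name (so it plays the role of `--name`):
Copy code
import * as k8s from "@pulumi/kubernetes";

// "operator" and "zookeeper" stand in for `helm install --name operator`
// and `helm install --name zookeeper`; the resource name becomes the
// release name when the templates are expanded.
const operator = new k8s.helm.v2.Chart("operator", {
    path: "confluent-operator/helm/confluent-operator",
    namespace: "operator",
    values: { operator: { enabled: true } },
});

const zookeeper = new k8s.helm.v2.Chart("zookeeper", {
    path: "confluent-operator/helm/confluent-operator",
    namespace: "operator",
    values: { zookeeper: { enabled: true } },
});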
The zookeeper pods stay Pending because there are unbound PersistentVolumeClaims, which tells me that the resources from the expanded charts aren't being created.
What I’ve used so far to deploy is this:
Copy code
const kafkaOperator = new k8s.helm.v2.Chart("confluent-operator", {
    path: "confluent-operator/helm/confluent-operator",
    namespace: namespace.kafka,
    values: awsConfluent.config
}, {dependsOn: [kafkaNamespace], providers: {kubernetes: k8sKafkaProvider}});
The `confluent-operator` path is a folder containing all the files in the download from confluent’s site. `awsConfluent.config` is a JSON’d version of `values.yaml`, so that I can just pass it an object.
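One way to produce that object without hand-converting the YAML, as a sketch (the `js-yaml` package and the file path here are assumptions, not from the thread):
Copy code
import * as fs from "fs";
import * as yaml from "js-yaml";

// Load the chart's values file (or a provider overlay such as
// providers/aws.yaml) into a plain object usable as Chart `values`.
const config = yaml.load(
    fs.readFileSync("confluent-operator/helm/confluent-operator/values.yaml", "utf8"),
) as Record<string, any>;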
Copy code
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  32s (x9 over 4m26s)  default-scheduler  pod has unbound immediate PersistentVolumeClaims (repeated 8 times)
c
It’s hard to say what’s going on here without knowing more about the chart specifically.
Generally when this happens it’s because the chart depends on Tiller, which is arguably a bug in the chart, since Tiller won’t exist in Helm 3.
h
So you can write charts that explicitly depend on Tiller?
c
Oh yes.
h
Ah
What do you recommend as a workaround in that case?
c
We are semantically equivalent to `helm template`
I’d start by seeing if anyone in the issues is having trouble with `helm template`
Do you know why the PVCs are unbound?
h
Good suggestion. And no, unfortunately. I’m also just wondering if I’m representing the steps properly in Pulumi. The steps say to use the same chart but with different `--name` options to create different deployments. Is that represented by the same `Chart` instance in Pulumi? Or multiple instances of the same chart with different release names?
c
hmm, sorry, which steps?
and when you say “different deployments,” you mean you want multiple instances of the chart, right?
h
Copy code
helm install \
-f ./providers/aws.yaml \
--name operator \
--namespace operator \
--set operator.enabled=true \
./confluent-operator
plus
Copy code
helm install \
-f ./providers/aws.yaml \
--name zookeeper \
--namespace operator \
--set zookeeper.enabled=true \
./confluent-operator
equals
Copy code
const kafkaOperator = new k8s.helm.v2.Chart("confluent-operator", {
    path: "confluent-operator/helm/confluent-operator",
    namespace: namespace.kafka,
    values: awsConfluent.config
}, {dependsOn: [kafkaNamespace], providers: {kubernetes: k8sKafkaProvider}});

awsConfluent.config.zookeeper.enabled = true;
const kafkaZookeeper = new k8s.helm.v2.Chart("confluent-zookeeper", {
    path: "confluent-operator/helm/confluent-operator",
    namespace: namespace.kafka,
    values: awsConfluent.config
}, {dependsOn: [kafkaNamespace], providers: {kubernetes: k8sKafkaProvider}});
?
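One thing worth flagging in that snippet: both charts share the same `awsConfluent.config` object, and it is mutated after the first `Chart` is constructed. A safer sketch (names follow the snippet above) gives each release its own copy of the values:
Copy code
// Give each release its own values object, so enabling zookeeper for the
// second chart cannot leak into the first one through the shared reference.
const operatorValues = { ...awsConfluent.config, operator: { ...awsConfluent.config.operator, enabled: true } };
const zookeeperValues = { ...awsConfluent.config, zookeeper: { ...awsConfluent.config.zookeeper, enabled: true } };

const kafkaOperator = new k8s.helm.v2.Chart("confluent-operator", {
    path: "confluent-operator/helm/confluent-operator",
    namespace: namespace.kafka,
    values: operatorValues,
}, { dependsOn: [kafkaNamespace], providers: { kubernetes: k8sKafkaProvider } });

const kafkaZookeeper = new k8s.helm.v2.Chart("confluent-zookeeper", {
    path: "confluent-operator/helm/confluent-operator",
    namespace: namespace.kafka,
    values: zookeeperValues,
}, { dependsOn: [kafkaNamespace], providers: { kubernetes: k8sKafkaProvider } });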
And `awsConfluent.config` is just a JSON representation of `values.yaml`
c
and when you run `pulumi preview`, does it show the PVCs being created?
can you paste the output of that?
h
Sure, one sec
Ok, so here’s the output of this code:
Copy code
awsConfluent.config.operator.enabled = true;
const kafkaOperator = new k8s.helm.v2.Chart("confluent-operator", {
    path: "confluent-operator/helm/confluent-operator",
    namespace: namespace.kafka,
    values: awsConfluent.config
}, {dependsOn: [kafkaNamespace], providers: {kubernetes: k8sKafkaProvider}});
c
are the PVCs in the 6 unchanged resources?
if not, looks like they’re not being created
h
No, it’s namespaces and k8sProviders
Then if I do this I get an error:
Copy code
awsConfluent.config.operator.enabled = true;
const kafkaOperator = new k8s.helm.v2.Chart("confluent-operator", {
    path: "confluent-operator/helm/confluent-operator",
    namespace: namespace.kafka,
    values: awsConfluent.config
}, {dependsOn: [kafkaNamespace], providers: {kubernetes: k8sKafkaProvider}});

awsConfluent.config.zookeeper.enabled = true;
const kafkaZookeeper = new k8s.helm.v2.Chart("confluent-zookeeper", {
    path: "confluent-operator/helm/confluent-operator",
    namespace: namespace.kafka,
    values: awsConfluent.config
}, {dependsOn: [kafkaNamespace], providers: {kubernetes: k8sKafkaProvider}});
c
let me take a look at the chart code real quick…
ah, can you send me a link to the exact version of the code you’re running?
the chart, I mean
h
Finally, if I instead use one chart instance and then just enable zookeeper, I get this:
Copy code
awsConfluent.config.operator.enabled = true;
awsConfluent.config.zookeeper.enabled = true;
const kafkaOperator = new k8s.helm.v2.Chart("confluent-operator", {
    path: "confluent-operator/helm/confluent-operator",
    namespace: namespace.kafka,
    values: awsConfluent.config
}, {dependsOn: [kafkaNamespace], providers: {kubernetes: k8sKafkaProvider}});
c
the ZK pods are not binding because the PVC is missing, right?
I don’t see PVCs for ZK in the chart… hmm…
actually I don’t see PVs, period.
that’s odd…
h
Interesting, ya. There’s just a storage class
I wonder if it’s up to the operator to make the claim
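That guess matches how this usually works: if the operator (or another controller) creates StatefulSets, the PVCs come from `volumeClaimTemplates` at runtime rather than from chart manifests, and the StorageClass dynamically provisions the PVs. An illustration of the mechanism only (none of these names come from the Confluent chart):
Copy code
import * as k8s from "@pulumi/kubernetes";

// A StatefulSet's volumeClaimTemplates generate one PVC per replica at
// runtime, and the referenced StorageClass provisions the backing PVs, so
// neither PVCs nor PVs need to appear in the chart itself.
const zk = new k8s.apps.v1.StatefulSet("zk-example", {
    spec: {
        serviceName: "zk",
        replicas: 3,
        selector: { matchLabels: { app: "zk" } },
        template: {
            metadata: { labels: { app: "zk" } },
            spec: {
                containers: [{
                    name: "zk",
                    image: "zookeeper:3.5",
                    volumeMounts: [{ name: "data", mountPath: "/data" }],
                }],
            },
        },
        volumeClaimTemplates: [{
            metadata: { name: "data" },
            spec: {
                accessModes: ["ReadWriteOnce"],
                storageClassName: "gp2", // hypothetical class name
                resources: { requests: { storage: "10Gi" } },
            },
        }],
    },
});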
c
I’m not quite sure what to say about that, I’m sure it makes sense, but I don’t know how.
can you link me to the repo so I can take a look at the issues real quick?
no CRDs, either. custom controllers?
h
Which repo?
c
the kafka-operator repo, whichever one you’re using
docs you link to don’t really talk about PVs either.
are you 100% sure that Helm successfully deployed this?
h
Oh ya, I ran the steps myself
c
It doesn’t check whether resources are initialized, etc
I mean: did it fully initialize?
h
Works from the CLI using tiller. Yup
And I don’t think I can link you to my repo. It’s private in my org. Lemme zip it for you
c
after deploying the charts can you run `kubectl get pvc --all-namespaces` and `kubectl get pv --all-namespaces`?
I just want to see if there are any PVs
oh
what are these “providers” for?
it looks like they’re doing some weird PV stuff
h
The providers are just different value yaml files for each cloud provider
c
oh, sorry, I meant that I’d like to look at the issues list
on, e.g., github
h
Ohh haha
I don’t think it’s public either?
c
ah
the kafka chart stuff isn’t public?
hmm
h
Nope. They just provide the zip and the chart is not published either. So, gotta download the files then follow the steps and reference the chart locally. Can check the files into your own version control though
c
weird.
it looks like they’re saying to
helm install
via these “providers” though
h
Ya
That’s what has been confusing me
c
but you’re just doing the actual charts
so what exactly does helm do when you run `helm install -f` with a provider?
does that work with `helm template`?
can you run the same command but switch `install` for `template`?
h
I haven’t tried `helm template` from the CLI; ya, I can do that now
c
it should be more or less the same
h
Ok
Also, don’t install tiller, right?
c
`helm template` just expands the chart
h
That was with
Copy code
helm template \
-f ./providers/aws.yaml \
--name operator \
--namespace operator \
--set operator.enabled=true \
./confluent-operator
c
so it does work
I’m not positive what will happen, but… can you try passing the provider YAML into the `new Chart`?
that said, this output is super confusing
h
I think I am? Here is my `awsProvider.ts`
I’m passing that to the `values` option for the `new Chart`
c
I meant something like:
Copy code
new k8s.helm.v2.Chart("confluent-operator", {
    path: "providers/aws.yaml",
h
It’s basically the `aws.yaml` provider file
c
also, when you do a `helm install`… can you show the YAML for the PVCs/PVs?
might tell us what made them
h
output of `kubectl describe zookeeper zookeeper -n operator`
c
it’s still creating it looks like?
I might have to just try to do this myself
and report back the fix
h
It’s started now
c
alright. give me a couple hours to finish some other stuff and I’ll give it a shot myself
do I have everything I need to stand it up myself?
h
Thanks. I appreciate the help. All you need is a k8s cluster.
And that zip plus instructions
c
alright, I am reasonably confident I can figure it out
h
Let me know if you need anything else
c
I will, thanks
h
I think I got it working
I had `st1` as the storage type in my json but it was `gp2` in the yaml file. Not sure why that would make a difference
From pulumi
c
ah
great!
h
I'm achieving my results by deploying the operator, then changing to `zookeeper.enabled=true` after the first run and running it again, since there are no checks to make sure the previous dependency has completed before proceeding. Can I write Pulumi health checks to prevent zookeeper from proceeding before the operator has finished? Or is that something I need to write into the helm charts?
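One pattern that might answer this, sketched under the assumption that the operator chart renders a Deployment you can look up by name (the namespace/name pair below is hypothetical): `Chart.getResource` returns the live resource, Pulumi's Kubernetes provider already waits for Deployments to finish rolling out, and `dependsOn` can then gate the zookeeper chart on that.
Copy code
// Look up the operator Deployment rendered by the first chart; the
// "operator"/"cc-operator" namespace and name are hypothetical.
const operatorDeployment = kafkaOperator.getResource(
    "apps/v1/Deployment", "operator", "cc-operator");

// The Deployment only resolves once its rollout completes, so depending on
// it sequences the zookeeper chart after the operator is actually up.
const kafkaZookeeper = new k8s.helm.v2.Chart("confluent-zookeeper", {
    path: "confluent-operator/helm/confluent-operator",
    namespace: namespace.kafka,
    values: awsConfluent.config, // with zookeeper.enabled = true, as above
}, { dependsOn: [operatorDeployment], providers: { kubernetes: k8sKafkaProvider } });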
Hey @creamy-potato-29402! I have a follow-up question related to the same helm files as this discussion, but a different issue. It’s regarding the creation of secrets when enabling multiple components at once. After going through the setup by creating the operator, zookeeper, and kafka components separately, once I enable multiple components (e.g. any combination of schemaregistry, ksql, controlcenter, etc.) I get an error saying one of the components’ `-apikeys` secrets was already created; however, if I create the individual components one by one I do not get the error.
When I check k8s, I notice that the `-apikeys` secret was indeed created but is empty. If I delete it and re-run pulumi, it creates successfully with data.
Not sure if this is a Pulumi issue where it's not tracking the secret resource properly, or if there is something going on with the helm charts where they initialize the secret and then go back in and add data, and when the data is added, Pulumi sees it as a second instance of the resource.
The file for the secret is pretty standard across all components as so:
Copy code
apiVersion: v1
kind: Secret
metadata:
  {{- include "confluent-operator.labels" . }}
  namespace: {{ .Release.Namespace }}
  name: {{ .Values.name }}-apikeys
type: Opaque
data:
  apikeys.json : {{ include "confluent-operator.apikeys" . | b64enc }}
Again, this only happens when multiple secrets are being created during a single pulumi run, not when they get created by themselves.
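If two chart instances in the same run really do render a Secret with the same `-apikeys` name, one possible workaround (a sketch, not a confirmed fix) is a chart transformation that elides the duplicate from all but one instance, using the pulumi-kubernetes pattern of rewriting an object into an empty `v1/List`:
Copy code
// Sketch: drop any "-apikeys" Secret from this chart instance so that only
// one instance in the stack owns it. Rewriting the object as an empty
// v1/List is how pulumi-kubernetes elides a rendered resource.
const dropApikeysSecrets = (obj: any) => {
    if (obj.kind === "Secret" && obj.metadata && obj.metadata.name &&
            obj.metadata.name.endsWith("-apikeys")) {
        obj.apiVersion = "v1";
        obj.kind = "List";
        obj.items = [];
        delete obj.metadata;
        delete obj.data;
    }
};

const kafkaKsql = new k8s.helm.v2.Chart("confluent-ksql", {
    path: "confluent-operator/helm/confluent-operator",
    namespace: namespace.kafka,
    values: awsConfluent.config, // with ksql.enabled = true, per the steps above
    transformations: [dropApikeysSecrets],
}, { providers: { kubernetes: k8sKafkaProvider } });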