#general

handsome-actor-1155 (09/25/2019, 6:35 PM)
Hello, I have a question regarding deploying Helm charts with Pulumi. I understand that Tiller does not need to be installed, since Pulumi expands the chart and submits it to k8s. That being said, I'm trying to deploy an operator and its components to k8s; I can do so manually with helm + Tiller, but if I try to use Pulumi, the second component fails to deploy. (Rest in comments)
I'm trying to deploy the Confluent Operator via Pulumi (https://docs.confluent.io/current/installation/operator/co-deployment.html). I'm able to deploy the operator and manager (https://docs.confluent.io/current/installation/operator/co-deployment.html#step-1-install-co-long), but once I try to deploy the rest of the components, they don't seem to work without Tiller.
```shell
helm install \
  -f ./providers/aws.yaml \
  --name zookeeper \
  --namespace operator \
  --set zookeeper.enabled=true \
  ./confluent-operator
```
Is this helm/operator setup different than others out there? Does Pulumi support the `--name` option in the `helm install --name zookeeper` command?
The zookeeper pods stay Pending because of unbound PersistentVolumeClaims, which suggests the resources from the expanded charts aren't being created.
What I’ve used so far to deploy is this:
```typescript
const kafkaOperator = new k8s.helm.v2.Chart("confluent-operator", {
    path: "confluent-operator/helm/confluent-operator",
    namespace: namespace.kafka,
    values: awsConfluent.config
}, {dependsOn: [kafkaNamespace], providers: {kubernetes: k8sKafkaProvider}});
```
The `confluent-operator` path is a folder containing all the files in the download from Confluent's site. `awsConfluent.config` is a JSON version of `values.yaml`, so I can just pass it an object.
```
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  32s (x9 over 4m26s)  default-scheduler  pod has unbound immediate PersistentVolumeClaims (repeated 8 times)
```

creamy-potato-29402 (09/25/2019, 7:48 PM)
It’s hard to say what’s going on here without knowing more about the chart specifically.
Usually when this happens it's because the chart depends on Tiller, which is really a chart bug, since Tiller won't exist in Helm 3.

handsome-actor-1155 (09/25/2019, 7:49 PM)
So you can write charts that explicitly depend on Tiller?

creamy-potato-29402 (09/25/2019, 7:49 PM)
Oh yes.

handsome-actor-1155 (09/25/2019, 7:49 PM)
Ah
What do you recommend as a workaround in that case?

creamy-potato-29402 (09/25/2019, 7:50 PM)
We are semantically equivalent to `helm template`.
I'd start by seeing if anyone in the issues is having trouble with `helm template`.
Do you know why the PVCs are unbound?
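To expand on "semantically equivalent to `helm template`": Pulumi expands the chart client-side, so anything that needs the Tiller server (release lookups, `.Capabilities.TillerVersion`, install-only hooks) won't behave the same. A rough way to scan a chart's templates for the most common Tiller-only constructs — the pattern list here is an illustrative assumption, not exhaustive:

```typescript
// Heuristic scan for template constructs that only work under Helm 2 + Tiller.
// The pattern list is illustrative, not exhaustive.
const tillerOnlyPatterns: RegExp[] = [
  /\.Capabilities\.TillerVersion/,     // Helm 2-only built-in, removed in Helm 3
  /"helm\.sh\/hook":\s*"?crd-install/, // crd-install hook, removed in Helm 3
];

function referencesTiller(templateSource: string): boolean {
  return tillerOnlyPatterns.some((p) => p.test(templateSource));
}

// A template guarding on TillerVersion renders differently under
// `helm template` (and Pulumi) than under `helm install` with Tiller.
const snippet = `{{- if .Capabilities.TillerVersion }}tiller: present{{- end }}`;
console.log(referencesTiller(snippet)); // true
```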

handsome-actor-1155 (09/25/2019, 7:55 PM)
Good suggestion. And no, unfortunately. I'm also just wondering if I'm representing the steps properly in Pulumi. The steps say to use the same chart but with different `--name` options to create different deployments. Is that represented by the same `Chart` instance in Pulumi? Or multiple instances of the same chart with different release names?

creamy-potato-29402 (09/25/2019, 7:56 PM)
hmm, sorry, which steps?
and when you say “different deployments,” you mean you want multiple instances of the chart, right?

handsome-actor-1155 (09/25/2019, 8:00 PM)
```shell
helm install \
  -f ./providers/aws.yaml \
  --name operator \
  --namespace operator \
  --set operator.enabled=true \
  ./confluent-operator
```
plus
```shell
helm install \
  -f ./providers/aws.yaml \
  --name zookeeper \
  --namespace operator \
  --set zookeeper.enabled=true \
  ./confluent-operator
```
equals
```typescript
const kafkaOperator = new k8s.helm.v2.Chart("confluent-operator", {
    path: "confluent-operator/helm/confluent-operator",
    namespace: namespace.kafka,
    values: awsConfluent.config
}, {dependsOn: [kafkaNamespace], providers: {kubernetes: k8sKafkaProvider}});

awsConfluent.config.zookeeper.enabled = true;
const kafkaZookeeper = new k8s.helm.v2.Chart("confluent-zookeeper", {
    path: "confluent-operator/helm/confluent-operator",
    namespace: namespace.kafka,
    values: awsConfluent.config
}, {dependsOn: [kafkaNamespace], providers: {kubernetes: k8sKafkaProvider}});
```
?
And `awsConfluent.config` is just a JSON representation of `values.yaml`
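One thing worth watching in the two-chart version: both `Chart` constructors receive the very same `awsConfluent.config` object, and `zookeeper.enabled = true` is set on it after the first chart is created. Since Pulumi evaluates asynchronously, it's safer not to rely on mutation order and to give each chart its own copy. A minimal sketch of the aliasing hazard with plain objects (the `Values` shape is hypothetical):

```typescript
// Two charts sharing one values object means a later mutation is visible
// through the first reference too; a deep copy removes the ordering hazard.
interface Values { operator: { enabled: boolean }; zookeeper: { enabled: boolean }; }

const base: Values = { operator: { enabled: true }, zookeeper: { enabled: false } };

const operatorValues = base;                                       // shared reference
const zookeeperValues: Values = JSON.parse(JSON.stringify(base));  // independent copy
zookeeperValues.zookeeper.enabled = true;

console.log(operatorValues.zookeeper.enabled); // false -- copy left the original intact
base.zookeeper.enabled = true;                 // late mutation, as in the snippet above
console.log(operatorValues.zookeeper.enabled); // true -- shared reference sees it
console.log(zookeeperValues.operator.enabled); // true -- copied before any mutation
```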

creamy-potato-29402 (09/25/2019, 8:05 PM)
and `preview`—does it show the PVCs being created?
can you paste the output of that?

handsome-actor-1155 (09/25/2019, 8:06 PM)
Sure, one sec
Ok, so here’s the output of this code:
```typescript
awsConfluent.config.operator.enabled = true;
const kafkaOperator = new k8s.helm.v2.Chart("confluent-operator", {
    path: "confluent-operator/helm/confluent-operator",
    namespace: namespace.kafka,
    values: awsConfluent.config
}, {dependsOn: [kafkaNamespace], providers: {kubernetes: k8sKafkaProvider}});
```

creamy-potato-29402 (09/25/2019, 8:14 PM)
are the PVCs in the 6 unchanged resources?
if not, looks like they’re not being created

handsome-actor-1155 (09/25/2019, 8:15 PM)
No, it’s namespaces and k8sProviders
Then if I do this I get an error:
```typescript
awsConfluent.config.operator.enabled = true;
const kafkaOperator = new k8s.helm.v2.Chart("confluent-operator", {
    path: "confluent-operator/helm/confluent-operator",
    namespace: namespace.kafka,
    values: awsConfluent.config
}, {dependsOn: [kafkaNamespace], providers: {kubernetes: k8sKafkaProvider}});

awsConfluent.config.zookeeper.enabled = true;
const kafkaZookeeper = new k8s.helm.v2.Chart("confluent-zookeeper", {
    path: "confluent-operator/helm/confluent-operator",
    namespace: namespace.kafka,
    values: awsConfluent.config
}, {dependsOn: [kafkaNamespace], providers: {kubernetes: k8sKafkaProvider}});
```

creamy-potato-29402 (09/25/2019, 8:15 PM)
let me take a look at the chart code real quick…
ah, can you send me a link to the exact version of the code you’re running?
the chart, I mean

handsome-actor-1155

Finally, if I instead use one chart instance and then just enable zookeeper, I get this:
```typescript
awsConfluent.config.operator.enabled = true;
awsConfluent.config.zookeeper.enabled = true;
const kafkaOperator = new k8s.helm.v2.Chart("confluent-operator", {
    path: "confluent-operator/helm/confluent-operator",
    namespace: namespace.kafka,
    values: awsConfluent.config
}, {dependsOn: [kafkaNamespace], providers: {kubernetes: k8sKafkaProvider}});
```

creamy-potato-29402 (09/25/2019, 8:18 PM)
the ZK pods are not binding because the PVC is missing, right?
I don’t see PVCs for ZK in the chart… hmm…
actually I don’t see PVs, period.
that’s odd…

handsome-actor-1155 (09/25/2019, 8:19 PM)
Interesting, ya. There’s just a storage class
I wonder if it’s up to the operator to make the claim

creamy-potato-29402 (09/25/2019, 8:19 PM)
I’m not quite sure what to say about that, I’m sure it makes sense, but I don’t know how.
can you link me to the repo so I can take a look at the issues real quick?
no CRDs, either. custom controllers?

handsome-actor-1155 (09/25/2019, 8:22 PM)
Which repo?

creamy-potato-29402 (09/25/2019, 8:22 PM)
the kafka-operator repo, whichever one you’re using
docs you link to don’t really talk about PVs either.
are you 100% sure that Helm successfully deployed this?

handsome-actor-1155 (09/25/2019, 8:23 PM)
Oh ya, I ran the steps myself

creamy-potato-29402 (09/25/2019, 8:23 PM)
It doesn’t check whether resources are initialized, etc
I mean: did it fully initialize?

handsome-actor-1155 (09/25/2019, 8:23 PM)
Works from the CLI using tiller. Yup
And I don’t think I can link you to my repo. It’s private in my org. Lemme zip it for you

creamy-potato-29402 (09/25/2019, 8:24 PM)
after deploying the charts, can you run `kubectl get pvc --all-namespaces` and `kubectl get pv`?
I just want to see if there are any PVs
oh
what are these “providers” for?
it looks like they’re doing some weird PV stuff

handsome-actor-1155 (09/25/2019, 8:26 PM)
The providers are just different value yaml files for each cloud provider

creamy-potato-29402 (09/25/2019, 8:27 PM)
oh, sorry, I meant that I’d like to look at the issues list
on, e.g., github

handsome-actor-1155 (09/25/2019, 8:27 PM)
Ohh haha
I don’t think it’s public either?

creamy-potato-29402 (09/25/2019, 8:27 PM)
ah
the kafka chart stuff isn’t public?
hmm

handsome-actor-1155 (09/25/2019, 8:29 PM)
Nope. They just provide the zip, and the chart isn't published either, so you've got to download the files, follow the steps, and reference the chart locally. You can check the files into your own version control, though.

creamy-potato-29402 (09/25/2019, 8:29 PM)
weird.
it looks like they're saying to `helm install` via these "providers" though
h

handsome-actor-1155

09/25/2019, 8:30 PM
Ya
That’s what has been confusing me

creamy-potato-29402 (09/25/2019, 8:30 PM)
but you’re just doing the actual charts
so what exactly does helm do when you run `helm install -f` with a provider?
does that work with `helm template`?
can you run the same command but switch `install` for `template`?

handsome-actor-1155 (09/25/2019, 8:31 PM)
I haven't tried `helm template` from the CLI; ya, I can do that now

creamy-potato-29402 (09/25/2019, 8:31 PM)
it should be more or less the same

handsome-actor-1155 (09/25/2019, 8:31 PM)
Ok
Also, don’t install tiller, right?

creamy-potato-29402 (09/25/2019, 8:32 PM)
`template` just expands the chart

handsome-actor-1155 (09/25/2019, 8:34 PM)
That was with
```shell
helm template \
  -f ./providers/aws.yaml \
  --name operator \
  --namespace operator \
  --set operator.enabled=true \
  ./confluent-operator
```

creamy-potato-29402 (09/25/2019, 8:35 PM)
so it does work
I’m not positive what will happen, but… can you try passing the provider YAML into the `new Chart`?
that said, this output is super confusing

handsome-actor-1155 (09/25/2019, 8:37 PM)
I think I am? Here is my `awsProvider.ts`
I'm passing that to the `values` option for the `new Chart`

creamy-potato-29402 (09/25/2019, 8:38 PM)
I meant something like:
```typescript
new k8s.helm.v2.Chart("confluent-operator", {
    path: "providers/aws.yaml",
```

handsome-actor-1155 (09/25/2019, 8:39 PM)
It’s basically the `aws.yaml` provider file

creamy-potato-29402 (09/25/2019, 8:40 PM)
also, when you do a `helm install`… can you show the YAML for the PVCs/PVs?
might tell us what made them

handsome-actor-1155 (09/25/2019, 8:43 PM)
output of `kubectl describe zookeeper zookeeper -n operator`

creamy-potato-29402 (09/25/2019, 8:44 PM)
it’s still creating it looks like?
I might have to just try to do this myself
and report back the fix

handsome-actor-1155 (09/25/2019, 8:45 PM)
It’s started now

creamy-potato-29402 (09/25/2019, 8:46 PM)
alright. give me a couple hours to finish some other stuff and I’ll give it a shot myself
do I have everything I need to stand it up myself?

handsome-actor-1155 (09/25/2019, 8:47 PM)
Thanks. I appreciate the help. All you need is a k8s cluster.
And that zip plus instructions

creamy-potato-29402 (09/25/2019, 8:47 PM)
alright, I am reasonably confident I can figure it out

handsome-actor-1155 (09/25/2019, 8:47 PM)
Let me know if you need anything else

creamy-potato-29402 (09/25/2019, 8:47 PM)
I will, thanks

handsome-actor-1155 (09/25/2019, 10:25 PM)
I think I got it working
I had `st1` as the storage type in my JSON but it was `gp2` in the YAML file. Not sure why that would make a difference from Pulumi.
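For context on why that mattered: `gp2` is EBS general-purpose SSD and `st1` is throughput-optimized HDD, so a storage type that drifts between the hand-translated JSON values and the chart's `values.yaml` can plausibly leave the provisioner unable to satisfy the claim, which matches the unbound-PVC symptom. A tiny guard that would have caught the drift (hypothetical helper; the type list is illustrative):

```typescript
// Hypothetical sanity check: the storage type in the translated JSON values
// should match the one the chart's values.yaml expects.
const knownEbsTypes = ["standard", "gp2", "gp3", "io1", "io2", "st1", "sc1"];

function checkStorageType(jsonValue: string, yamlValue: string): string[] {
  const problems: string[] = [];
  if (!knownEbsTypes.includes(jsonValue)) {
    problems.push(`unknown EBS volume type "${jsonValue}"`);
  }
  if (jsonValue !== yamlValue) {
    problems.push(`JSON values say "${jsonValue}" but values.yaml says "${yamlValue}"`);
  }
  return problems;
}

console.log(checkStorageType("st1", "gp2")); // flags the drift hit here
console.log(checkStorageType("gp2", "gp2")); // no problems
```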

creamy-potato-29402 (09/26/2019, 2:27 AM)
ah
great!

handsome-actor-1155 (09/26/2019, 4:28 AM)
I'm achieving my results by deploying the operator, then changing to `zookeeper.enabled=true` after the first run, and running it again, since there are no checks to make sure the previous dependency has completed before proceeding. Can I write Pulumi health checks to prevent zookeeper from proceeding before the operator has finished? Or is that something I need to write into the helm charts?
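On the health-check question: `dependsOn` orders resource creation, and the Pulumi Kubernetes provider does await readiness for many built-in kinds, but one chart depending on another doesn't by itself prove the operator's controllers are up; newer @pulumi/kubernetes versions expose a chart's resources via a `ready` property that can be fed to `dependsOn` (worth verifying against the version in use). The gating idea, reduced to plain promises with stand-in functions (not Pulumi API):

```typescript
// Stand-ins for "deploy, then wait for readiness, then start the next step".
async function deployOperator(): Promise<string> {
  // In a real program this would resolve once the operator Deployment is Available.
  return "operator-ready";
}

async function deployZookeeper(operatorState: string): Promise<string> {
  if (operatorState !== "operator-ready") {
    throw new Error("refusing to start zookeeper before the operator is ready");
  }
  return "zookeeper-deployed";
}

async function main(): Promise<string> {
  const op = await deployOperator(); // the await is the readiness gate
  return deployZookeeper(op);        // zookeeper starts strictly afterwards
}

main().then((r) => console.log(r)); // "zookeeper-deployed"
```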
Hey @creamy-potato-29402! I have a follow-up question related to the same helm files as this discussion but a different issue. It's regarding the creation of secrets when enabling multiple components at once. If I go through the setup by creating the operator, zookeeper, and kafka components separately, then once I enable multiple components (e.g. any combination of schemaregistry, ksql, controlcenter, etc.), I get an error saying one of the components' `-apikeys` secrets was already created; however, if I create the individual components one by one I do not get the error.
When I check k8s, I notice that the `-apikeys` secret was indeed created but is empty. If I delete it and re-run pulumi, it creates successfully with data.
Not sure if this is a Pulumi issue where it's not tracking the secret resource properly, or if something is going on in the helm charts where the secret is initialized first and the data added afterwards, so that when the data is added, Pulumi sees it as a second instance of the resource.
The file for the secret is pretty standard across all components, like so:
```yaml
apiVersion: v1
kind: Secret
metadata:
  {{- include "confluent-operator.labels" . }}
  namespace: {{ .Release.Namespace }}
  name: {{ .Values.name }}-apikeys
type: Opaque
data:
  apikeys.json: {{ include "confluent-operator.apikeys" . | b64enc }}
```
Again, this only happens when multiple secrets are being created during a single Pulumi run, not when they get created by themselves.
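One plausible reading of the `-apikeys` error: every component's Secret is named `{{ .Values.name }}-apikeys`, so if a single expansion with several components enabled renders the same namespace/name pair twice, Pulumi would see a duplicate resource and report it as already created. A quick check over rendered, already-parsed manifests (hypothetical helper and types, not Pulumi API):

```typescript
// Detect duplicate kind/namespace/name triples among rendered manifests --
// the symptom Pulumi reports as a resource that was "already created".
interface Manifest { kind: string; metadata: { namespace: string; name: string }; }

function findDuplicates(manifests: Manifest[]): string[] {
  const seen = new Set<string>();
  const dupes: string[] = [];
  for (const m of manifests) {
    const key = `${m.kind}/${m.metadata.namespace}/${m.metadata.name}`;
    if (seen.has(key)) dupes.push(key);
    seen.add(key);
  }
  return dupes;
}

// Two components whose templates resolve to the same secret name collide:
const rendered: Manifest[] = [
  { kind: "Secret", metadata: { namespace: "operator", name: "ksql-apikeys" } },
  { kind: "Secret", metadata: { namespace: "operator", name: "ksql-apikeys" } },
];
console.log(findDuplicates(rendered)); // ["Secret/operator/ksql-apikeys"]
```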