cold-car-67614 · 05/05/2021, 6:55 PM
Hello! We use Rancher and I am listing the Rancher clusters in the Pulumi config file and looping over them to deploy a Helm chart. It looks like I am missing something as there are resource name collisions. I want the _release_name_ to be the same in both releases but simply applied to different clusters. I tried looking at the transformations stuff but I don't think I can update the resource_names with that. Any tips are appreciated!
billowy-army-68599 · 05/05/2021, 6:56 PM
can you share your code? 😄
cold-car-67614 · 05/05/2021, 6:57 PM
Uh sure. How much of it? It's a few files and a few hundred lines. The helm chart deploy looks like this:
helm.Chart(
            release_name=f"{resource_name}/consul",
            config=helm.ChartOpts(
                chart="consul",
                namespace=args.ns,
                values=get_values_or_promise("consul.yaml", tmpl_vars),
                fetch_opts=helm.FetchOpts(
                    repo="https://helm.releases.hashicorp.com",
                ),
                transformations=[_strip_status],
            ),
            opts=k8s_opts,
        )
I am building a k8s_opts for each cluster and calling the helm chart with a different opts to point at the right cluster.
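That per-cluster loop can be sketched in plain Python to show the naming scheme (the cluster names here are hypothetical; the real list comes from the Pulumi config mentioned above):

```python
# Hypothetical cluster names; in the real program these come from Pulumi config.
clusters = ["entsvcs-dev-lab", "entsvcs-prod-lab"]

def chart_names(clusters, release="consul"):
    # One unique Pulumi resource name per (cluster, release) pair; the Helm
    # release itself can stay "consul" on every cluster.
    return {c: f"{c}/{release}" for c in clusters}

names = chart_names(clusters)
assert names["entsvcs-dev-lab"] == "entsvcs-dev-lab/consul"
# Distinct names per cluster, so the Chart resources themselves never collide:
assert len(set(names.values())) == len(clusters)
```

Each name in that mapping would be passed as the Chart's resource name, with a matching per-cluster k8s.Provider in the options.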
I think the "{resource_name}/consul" isn't doing much. The generated resource name for a k8s resource looks like this:
+   │     └─ kubernetes:core/v1:Service                                               consul-system/consul-dns                                   create
And that happens for each cluster.
The values.yaml sets a fullnameOverride. This is set to consul, which is why it looks like this: same namespace and same name for the k8s metadata.name field, which is exactly what I want on multiple clusters.
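The collision can be seen with a rough model of how pulumi-kubernetes derives a child resource's Pulumi name from the manifest (an approximation for illustration, not the provider's actual code):

```python
def child_resource_name(name, namespace=None, resource_prefix=None):
    # Rough model: namespaced children are named "namespace/name"; an
    # optional resource_prefix is prepended with a dash.
    base = f"{namespace}/{name}" if namespace else name
    return f"{resource_prefix}-{base}" if resource_prefix else base

# Same chart on two clusters with fullnameOverride: consul -> identical
# child names, which is exactly the duplicate seen in the preview:
assert child_resource_name("consul-dns", "consul-system") == "consul-system/consul-dns"

# A per-cluster prefix makes them diverge:
a = child_resource_name("consul-dns", "consul-system", "entsvcs-dev-lab")
b = child_resource_name("consul-dns", "consul-system", "entsvcs-prod-lab")
assert a != b
```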
The alternative is to make a unique stack per cluster, but I was hoping I could get the looping to work.
Building the k8s_opts looks like this:
cluster = rancher.get_cluster(name=args.name, opts=self.rancher_invoke)

        self.k8s_provider = k8s.Provider(
            resource_name=f"{resource_name}/k8s",
            kubeconfig=cluster.kube_config,
            opts=child_opts,
        )

        self.k8s_opts = pulumi.ResourceOptions(provider=self.k8s_provider)
billowy-army-68599 · 05/05/2021, 7:27 PM
where is your loop?
if you're deploying these resources to different clusters in the same stack, you'll need to give the resource name a unique identifier:
helm.Chart(
            release_name=f"{cluster_name}/{resource_name}/consul",
            config=helm.ChartOpts(
                chart="consul",
                namespace=args.ns,
                values=get_values_or_promise("consul.yaml", tmpl_vars),
                fetch_opts=helm.FetchOpts(
                    repo="https://helm.releases.hashicorp.com",
                ),
                transformations=[_strip_status],
            ),
            opts=k8s_opts,
        )
something like that
cold-car-67614 · 05/05/2021, 7:43 PM
Yeah, that's what I am doing. The resource_name has the cluster name in it. But the duplicate error is not on the Helm chart resource itself; it seems to be on the inner k8s child resources.
I am wondering if this could perhaps be a feature request? Maybe the Helm Chart resource should take an optional resource_name_prefix argument that gets prepended to all the resource names. That way, if we deploy the exact same Helm chart twice in 2 clusters, we can avoid resource_name collisions?
Unless there is a way to do that as part of a transformation?
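For reference, a Chart transformation like the _strip_status used above is just a callable receiving each rendered manifest as a plain dict plus its resource options; the body below is an assumption matching the name, not the actual helper from this thread. Since the child names are derived from the manifest metadata, renaming via a transformation would also rename the Kubernetes object, which is why a pure-URN prefix can't be bolted on here:

```python
def _strip_status(obj, opts):
    """Drop the status block from a rendered manifest (sketch).

    pulumi-kubernetes calls each transformation with the manifest dict
    (obj) and the resource's options (opts); mutations to obj are applied
    before the child resource is registered.
    """
    obj.pop("status", None)

# Quick check against a fake manifest:
manifest = {"kind": "Service", "metadata": {"name": "consul-dns"}, "status": {"ready": True}}
_strip_status(manifest, None)
assert "status" not in manifest
```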
billowy-army-68599 · 05/05/2021, 8:16 PM
[shared link not captured in the export]
cold-car-67614 · 05/05/2021, 8:17 PM
Wow, blind. I shall give that a go but I think that's exactly what I am looking for.
👍 1
Hmmm I wonder if it's bugged or something. The prefix I give it is not showing up in the generated resources.
helm.Chart(
            release_name="consul",
            config=helm.ChartOpts(
                resource_prefix=resource_name,
                chart="consul",
                namespace=args.ns,
                values=get_values_or_promise("consul.yaml", tmpl_vars),
                fetch_opts=helm.FetchOpts(
                    repo="https://helm.releases.hashicorp.com",
                ),
                transformations=[_strip_status],
            ),
            opts=k8s_opts,
        )
resource_name here is the cluster name, in my case entsvcs-dev-lab. The generated resource is this:
+   │     ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRole                      consul-connect-injector-webhook                            create     1 error
The next iteration throws an error
kubernetes:rbac.authorization.k8s.io/v1:ClusterRole (consul-connect-injector-webhook):
    error: Duplicate resource URN 'urn:pulumi:lab::net-inf::rei:helm:Consul$kubernetes:helm.sh/v3:Chart$kubernetes:rbac.authorization.k8s.io/v1:ClusterRole::consul-connect-injector-webhook'; try giving it a unique name
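The shape of that URN can be reproduced with a small model of how Pulumi composes URNs (stack, project, the $-joined chain of parent types, then the resource name); note the trailing segment is just the child's resource name, so two charts rendering the same ClusterRole produce byte-identical URNs:

```python
def make_urn(stack, project, type_chain, name):
    # urn:pulumi:<stack>::<project>::<type$type$...>::<name>
    return f"urn:pulumi:{stack}::{project}::{'$'.join(type_chain)}::{name}"

urn = make_urn(
    "lab",
    "net-inf",
    [
        "rei:helm:Consul",                                      # the component parent
        "kubernetes:helm.sh/v3:Chart",                          # the Chart
        "kubernetes:rbac.authorization.k8s.io/v1:ClusterRole",  # the colliding child
    ],
    "consul-connect-injector-webhook",
)
assert urn == (
    "urn:pulumi:lab::net-inf::rei:helm:Consul"
    "$kubernetes:helm.sh/v3:Chart"
    "$kubernetes:rbac.authorization.k8s.io/v1:ClusterRole"
    "::consul-connect-injector-webhook"
)
```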
billowy-army-68599 · 05/05/2021, 8:25 PM
ahh it's the urn, not the resource name
cold-car-67614 · 05/05/2021, 8:26 PM
Oh, I was assuming the resource_name was the ending part of the URN.
It looks like someone else reported this but the ticket got closed without a resolution possibly? https://github.com/pulumi/pulumi-kubernetes/issues/616
billowy-army-68599 · 05/05/2021, 8:29 PM
give me a few...
cold-car-67614 · 05/05/2021, 8:29 PM
Oh no rush at all. I really appreciate your time. Thank you so much.
billowy-army-68599 · 05/05/2021, 11:17 PM
you need to remove release_name
cold-car-67614 · 05/05/2021, 11:19 PM
Looks like pulumi v3.1 and provider v3.1.
billowy-army-68599 · 05/05/2021, 11:19 PM
yeah, you need to remove release_name; that's the thing setting the URN to a hardcoded name
which is why you're getting the URN clash
cold-car-67614 · 05/05/2021, 11:20 PM
It looks like release_name is the first positional argument, so you are also setting the release_name, from what I can tell.
I'll give it another shot with resource_prefix set. If it works for you, it should work for me.
billowy-army-68599 · 05/05/2021, 11:22 PM
yah this is the result
cold-car-67614 · 05/05/2021, 11:23 PM
Ok awesome. What you see there is exactly what I was hoping for.
In my values.yaml I was setting this:
#fullnameOverride: consul
It looks like that is causing it. On your side you could set it like:
values={
  "fullnameOverride": "consul",
}
That should reproduce it. I'll try to run it with this and see what happens. I was thinking the resource_prefix would only affect the Pulumi URNs and not the metadata.name field on the k8s side.
Yeah, looking at the details in the preview, that field is adding the prefix to the Kubernetes manifest as well, which isn't quite what I want.