#typescript
b
Apologies if I'm doing something really dumb here, but I'm trying to create a `k8s.Provider` with a `kubeconfig` that is an `Output<string>` (from a kubernetes cluster dynamic resource). The `k8s.Provider` seems to accept a type `Input<string>` for `kubeconfig`, but when I then try to use this provider with a `k8s.yaml.ConfigFile`, I get the error `TypeError: Cannot read property 'map' of undefined` from `@pulumi\yaml\yaml.ts:2993:14`. The problem goes away if I use a plain `string` for `kubeconfig` instead, but I can't do that in this case because the cluster isn't created yet. It looks like I could work around this by putting this all in an `.apply`, but then I'm allocating resources inside an apply, which seems like a bad idea? Is this possible to do / am I doing something wrong?
l
Properties of type `Input<string>` can take `Output<string>`s. Is it not working if you just pass the kubeconfig directly?
b
The provider seemingly creates OK (no typescript errors or anything), but then I get that `TypeError: Cannot read property 'map' of undefined` when I try to use the provider.
l
How are you using it?
b
```typescript
new k8s.yaml.ConfigFile("k8s-kubernetes-dashboard-yaml", { file: `${__dirname}/yaml/kubernetes-dashboard.yaml` }, { provider });
```
That does seem to work if the kubeconfig is a `string` rather than an `Output<string>`, even though it looks like an `Output<string>` should be accepted?
l
Yes, it should be. It sounds like you're doing everything right. How are you constructing the provider? Look for things which might be misusing the Output as a string... maybe you're concatenating the Output with a string?
Though the error message implies that you're dereferencing an undefined value, which doesn't seem to match what you're describing.
b
```typescript
export const provider = new k8s.Provider("production", {
  cluster: productionCluster.clusterName,
  kubeconfig,
});
```
Both `productionCluster.clusterName` and `kubeconfig` are of type `Output<string>`, but the `cluster` doesn't seem to be the issue. If I remove that or change it, it doesn't affect the error. It seems to just be `kubeconfig` that causes it.
l
Have you confirmed that the value inside the kubeconfig output is in the correct format? It's supposed to be a yaml string, right?
b
I also can't figure out what `node_modules\@pulumi\yaml\yaml.ts:2993:14` is referring to. There doesn't seem to be a file in node_modules by that name, so I can't look at that code.
It's a path to a kubeconfig, which seems to be allowed by the docs: "The contents of a kubeconfig file or the path to a kubeconfig file." Although I'm not sure if it matters at that point, since the output isn't resolved yet, I think? The cluster it comes from isn't up yet, so it doesn't actually have a concrete value.
I'm pretty new to Pulumi in general, so it's possible I'm misunderstanding something here
l
It all looks good to me. I've been hunting through the Pulumi examples but I can't find one that creates a k8s Provider...
b
Yeah, it's possible I'm doing something odd or off the beaten path here. I don't know if there's a better way to accomplish this, but effectively I'm trying to make a kubernetes cluster, and then immediately start populating it with stuff.
So I want subsequent kubernetes operations to go to that cluster rather than whatever might be in my current kubeconfig
l
That fits my idea of what should happen. Maybe someone over in #kubernetes would be able to help quicker?
b
I'll cross post there, thanks!
h
Did you ever find a solution to this? I'm getting the same error. I found a couple of examples of what I want to do here. It seems like setting `{ provider: cluster.provider }` should make it work, but I'm getting the `'map' of undefined` error too.
b
Sadly I did not. Instead I ended up doing something really ugly and hacky to get around it.
h
When I comment out the `ConfigFile` line and do `pulumi up`, the cluster comes up successfully. Then if I include the `ConfigFile` line again, that manifest successfully deploys. They just don't like to deploy from 0 to both cluster and manifest deployed on the same `pulumi up`. Do you see the same behavior?
b
Yep, that's exactly what it was doing for me.
h
What was your hacky solution, if you don’t mind me asking?
b
I ended up making the Output into a `Promise<k8s.Provider>` that reads the private field `isKnown` off of the output. And then where I wanted to use the Provider, I did it with a `.then`.
b
```typescript
const provider = new Promise<k8s.Provider>(res => {
  (kubeconfig as unknown as { isKnown: Promise<boolean> }).isKnown.then(known => {
    if (known) {
      res(kubeProvider);
      return;
    }
    console.log("Kubeconfig is not known; skipping kubernetes resources, so the preview may be inaccurate.");
    // We don't reject here because this is not an error we want to fail on.
    // Instead, things waiting for this provider will hang and not be created.
  });
});
```
And then when I wanted to use it, I did:
```typescript
provider.then(provider => {
  new k8s.core.v1.Namespace("some-namespace", {}, { provider });
});
```
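For anyone following along, here is a minimal, self-contained sketch of the same gating pattern with the Pulumi types mocked out (`FakeOutput` and `FakeProvider` are made-up stand-ins, not real Pulumi classes; the `isKnown: Promise<boolean>` shape is an assumption about Pulumi's private internals, as discussed above). The provider is wrapped in a Promise that only resolves once the output reports it is known, so anything chained with `.then` simply never runs when the value isn't available yet:

```typescript
// Stand-in for the private shape read off a pulumi.Output
// (assumption: Outputs carry an internal `isKnown: Promise<boolean>`).
interface FakeOutput {
  isKnown: Promise<boolean>;
}

// Stand-in for k8s.Provider.
class FakeProvider {
  constructor(public readonly name: string) {}
}

// Resolve with the provider only once the output's value is known;
// otherwise log and leave the promise pending forever, so dependents
// are simply never created (no rejection, no failure).
function gateProvider(output: FakeOutput, provider: FakeProvider): Promise<FakeProvider> {
  return new Promise(res => {
    output.isKnown.then(known => {
      if (known) {
        res(provider);
        return;
      }
      console.log("Kubeconfig is not known; skipping dependent resources.");
    });
  });
}

// Usage: the "resource" is only created when the provider resolves.
const created: string[] = [];
gateProvider({ isKnown: Promise.resolve(true) }, new FakeProvider("production"))
  .then(p => created.push(`namespace via ${p.name}`));
```

The design trade-off is the one described below: a permanently pending promise silently skips resources instead of failing, which keeps `pulumi up` working end to end at the cost of an inaccurate preview.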
b
yeah this appears to be a bug 😞
b
This solution is super ugly because it makes the preview wrong if the cluster hasn't actually been created yet.
h
Interesting… Ok, thanks for sharing
b
However, it does create the whole thing in one go after a `pulumi up`: it creates the cluster, then notices there are more resources, and creates those too. I decided that was more important than the preview always being accurate, and I consider the case where there's no cluster to be rare.
No problem. Would definitely like to have a fix at some point so I can get rid of this pattern, but it's been working OK for me so far.
h
Wait, your problem is creating a namespace? Because I'm able to do that successfully without getting an error:
```typescript
const namespace = new k8s.core.v1.Namespace(`${ns}-ns`,
  {
    metadata: {
      name: ns,
      labels: {
        'app.kubernetes.io/name': 'aws-load-balancer-controller',
      }
    }
  },
  {
    provider: cluster.provider,
    parent: cluster.provider
  });
```
b
I just used that as an example. I wrapped all usage of the provider in the promise because I didn't want to try and figure out which kubernetes resources strictly required it.
h
Ahh, ok
b
I think it was actually a `k8s.yaml.ConfigFile` that was initially causing me issues.
h
Yeah, that’s my problem too
b
But it didn't seem worth trying to figure out which was which, especially because there are a bunch of chained dependencies there.
h
yeah, makes sense
where does `kubeProvider` come from in your workaround?
b
Ah, sorry, left that part out.
Right above the provider, it's just this:
```typescript
const kubeProvider = new k8s.Provider("production", {
  cluster: productionKubernetes.clusterName,
  kubeconfig,
});
```
Where `kubeconfig` is a `pulumi.Output<string>`.
I make the kubeProvider in the standard way, but only 'release it' when I know its underlying Output actually has a value.
So everything that depends on it just doesn't run if it's not ready yet
h
Thanks for sharing your workaround. Looks like that will work for me!