# general
h
(second try) I'm trying to deploy a chart to a cluster I created in the same stack. Pulumi doesn't use the provider, or at least I would expect that telling k8s.helm.v3.Chart about the provider being the one I created would make it use the kubeconfig from that provider. Is that way off?
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

const k8sProvider = new k8s.Provider('k8s-cluster',
    {
        cluster: pulumi.interpolate`${cluster}`,
        // kubeconfig: cluster.kubeconfig,
    },
    {
        parent: cluster
    })
const karpenterNamespace = new k8s.core.v1.Namespace('karpenter', {},
    {
        parent: k8sProvider
    })

const nginxIngress = new k8s.helm.v3.Chart("nginx-ingress",
    {
        path: "./vendor/karpenter/charts/karpenter",

        namespace: pulumi.interpolate`${karpenterNamespace}`,
        values: {
            controller: {
                env: [
                    {
                        name: "AWS_REGION",
                        value: "eu-west-1",
                    }
                ]
            }
        }
    },
    {
        provider: k8sProvider,
        parent: karpenterNamespace
    });
But this bails out + there are a million other resource definitions from the chart that complain, rightly so -- my local kubeconfig doesn't even know about the cluster and I don't see a reason why it should:
Diagnostics:
  pulumi:pulumi:Stack (pulumi-eks-karpenter-dev):
    error: update failed
    error: Running program 'C:\Users\marti\src\pulumi-eks-karpenter' failed with an unhandled exception:
    Error: invocation of aws:ssm/getParameter:getParameter returned an error: error reading from server: read tcp 127.0.0.1:55120->127.0.0.1:55117: use of closed network connection
        at Object.callback (C:\Users\marti\src\pulumi-eks-karpenter\node_modules\@pulumi\runtime\invoke.ts:159:33)
        at Object.onReceiveStatus (C:\Users\marti\src\pulumi-eks-karpenter\node_modules\@grpc\grpc-js\src\client.ts:338:26)
        at Object.onReceiveStatus (C:\Users\marti\src\pulumi-eks-karpenter\node_modules\@grpc\grpc-js\src\client-interceptors.ts:426:34)
        at Object.onReceiveStatus (C:\Users\marti\src\pulumi-eks-karpenter\node_modules\@grpc\grpc-js\src\client-interceptors.ts:389:48)
        at C:\Users\marti\src\pulumi-eks-karpenter\node_modules\@grpc\grpc-js\src\call-stream.ts:276:24
        at processTicksAndRejections (node:internal/process/task_queues:78:11)

  kubernetes:rbac.authorization.k8s.io/v1:ClusterRole (nginx-ingress-karpenter-admin):
    error: configured Kubernetes cluster is unreachable: unable to load Kubernetes client configuration from kubeconfig file. Make sure you have:

         • set up the provider as per https://www.pulumi.com/registry/packages/kubernetes/installation-configuration/

     cluster "[object Object]" does not exist
e
I would expect that telling k8s.helm.v3.Chart about the provider being the one I create it uses the kubeconfig from that provider
I would expect that to... but your kubeconfig line is commented out?
h
Aha! Apparently there's a massive difference between making the kubeconfig known and making the cluster known in the provider. Yeah, I mean if I tell it what the cluster is then I'd expect "the obvious" ... imho that means any k8s stuff that uses this provider uses the kubeconfig from that provider without being explicit about it
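For reference, a sketch of the kubeconfig-based wiring being described here. This is an assumption-laden outline, not a verified fix: `cluster` stands for the EKS cluster resource created earlier in the stack, and its `kubeconfig` output is assumed to exist with that name.

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Assumption: `cluster` is the EKS cluster resource created earlier in the
// stack, exposing a `kubeconfig` output (declared here only for the sketch).
declare const cluster: { kubeconfig: pulumi.Output<any> };

// Pass the kubeconfig itself to the provider, not the cluster object.
const k8sProvider = new k8s.Provider("k8s-cluster", {
    kubeconfig: cluster.kubeconfig,
});

// Resources pick up that kubeconfig by referencing the provider explicitly.
const karpenterNamespace = new k8s.core.v1.Namespace(
    "karpenter",
    {},
    { provider: k8sProvider },
);

const karpenter = new k8s.helm.v3.Chart(
    "karpenter",
    {
        path: "./vendor/karpenter/charts/karpenter",
        // Reference the namespace's name output, not the resource object,
        // to avoid the same "[object Object]" stringification.
        namespace: karpenterNamespace.metadata.name,
    },
    { provider: k8sProvider },
);
```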
Well, now it's back-off restarting container. Can I please have the monoliths back? They were working, at least 🙂
e
Yeah, I'm not sure what setting just "cluster" would do, and I don't know k8s well enough to say whether that would be the obvious behaviour or not. Feel free to open an issue about it on https://github.com/pulumi/pulumi-kubernetes/issues, the experts might have a comment to make.