# general
gifted-island-55702
Hi! I have an issue with the Kubernetes provider. According to the documentation (https://pulumi.io/quickstart/kubernetes/setup.html) it’s possible to use a provider that doesn’t use the local `~/.kube` config but rather configuration exported, for example, from a GKE cluster.
I have created a k8s provider in one stack with this code:
```typescript
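// Assumed imports, not shown in the original snippet; the exact `gke` module is
// whatever exposes the Cluster type in this project:
// import * as pulumi from "@pulumi/pulumi";
// import * as k8s from "@pulumi/kubernetes";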
function createKubernetesProvider(name: string, gkeCluster: gke.Cluster): k8s.Provider {
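    // Build a kubectl context name from the cluster's project, location (region or zone), and name.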
    const context = pulumi.
        all([gkeCluster.name, gkeCluster.project, gkeCluster.region, gkeCluster.zone]).
        apply(([name, project, region, zone]) => {
            const location = region || zone
            return `${project}_${location}_${name}`
        })

    const kubeconfig = pulumi.
        all([context, gkeCluster.endpoint, gkeCluster.masterAuth]).
        apply(([context, endpoint, auth]) => {
            return `apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ${auth.clusterCaCertificate}
    server: https://${endpoint}
  name: ${context}
contexts:
- context:
    cluster: ${context}
    user: ${context}
  name: ${context}
current-context: ${context}
kind: Config
preferences: {}
users:
- name: ${context}
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: gcloud
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
`;
        });
    
    return new k8s.Provider(name, {
        context: context,
        kubeconfig: kubeconfig,
    })
}
```
And then I later used it in another stack:
```typescript
const infraStack = new pulumi.StackReference('example-infra-dev')
const k8sProvider = infraStack.getProvider('k8sProvider')

const appName = 'example'
const appLabels = {app: appName, env: 'dev'}

const exampleConfig = new k8s.core.v1.ConfigMap(appName, {
    metadata: { labels: appLabels },
    data: { 'config.txt': fs.readFileSync('config.txt').toString()}
}, {
    provider: k8sProvider
})
```
But when I run `pulumi update`, my config map is not created in the cluster configured in my k8s provider but instead in the default cluster selected in my `~/.kube` configuration.
What am I doing incorrectly?
faint-motherboard-95438
Hey @gifted-island-55702, I think you misunderstand what `Resource.getProvider` is used for. It gets the provider attached to the resource you call it on (like the provider you pass to your `ConfigMap` in the `opts` parameter). Here you want to get something defined in another stack. Reading the docs for `StackReference`, you have:
```
Manages a reference to a Pulumi stack. The referenced stack's outputs are available via the `outputs` property or the `output` method.
```
Which means you first have to export the provider from the stack `example-infra-dev` in order to get it with `infraStack.outputs.k8sProvider` or `infraStack.getOutput('k8sProvider')`. To make it available as an output, in your `example-infra-dev` stack you simply need to do something like this in your `index.ts`:
```typescript
export const k8sProvider = createKubernetesProvider('my-provider', my_cluster);
```
gifted-island-55702
Oh, right!
What would be the type of the returned output - would it be an actual Kubernetes Provider object I can pass directly, or some kind of ID that I need to use to construct the actual Provider object, @faint-motherboard-95438?
```typescript
const k8sProvider = infraStack.getOutput('k8sProvider') as pulumi.Output<k8s.Provider>
...
const appConfig = new k8s.core.v1.ConfigMap(appName, {
    metadata: { labels: appLabels },
    data: { 'config.txt': fs.readFileSync('config.txt').toString()}
}, {
    provider: k8sProvider
})
```
This won’t compile, as I have incompatible types: `Output<k8s.Provider>` vs. just plain `k8s.Provider`. I guess I can call `k8sProvider.get()`, but I am not sure when it would be safe to call the `Output.get()` method.
faint-motherboard-95438
@gifted-island-55702 I never tried this use case myself, so I’m not sure what you will get here. The type defined is `Output<any>`, but since you are actually referencing a value exported from another stack, I would guess it is already defined when you call it. I would first try:
```typescript
const { k8sProvider } = infraStack.outputs
...
const appConfig = new k8s.core.v1.ConfigMap(appName, {
    metadata: { labels: appLabels },
    data: { 'config.txt': fs.readFileSync('config.txt').toString()}
}, {
    provider: k8sProvider
})
```
That should avoid the compilation issue and maybe give you what you need straight away.
white-balloon-205
This may work as written, but I'm not 100% positive you can export a Provider and then reference it from another stack. I believe the Provider needs to be created in the stack where it is used, to ensure that that stack has a properly instantiated connection to the target resource provider (the Kubernetes cluster in this case). If it's possible here, I'd suggest exporting the raw data needed to connect to the cluster from the stack - like the `kubeconfig` constructed in `createKubernetesProvider`. cc also @microscopic-florist-22719 and @creamy-potato-29402, who have been working on reference architectures for multi-stack Kubernetes deployments and can suggest what patterns we've seen work well.
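Roughly the pattern I mean (a sketch only - I haven't run this exact code, and the output and resource names here are placeholders): the infra stack would export the kubeconfig string itself as a stack output (built the same way as in `createKubernetesProvider`), and the app stack would then do something like:
```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Read the kubeconfig that the infra stack exported as a plain stack output.
const infraStack = new pulumi.StackReference('example-infra-dev');
const kubeconfig = infraStack.getOutput('kubeconfig');

// Construct the provider locally, in the stack that actually uses it.
const k8sProvider = new k8s.Provider('gke-k8s', { kubeconfig: kubeconfig });

// Pass it explicitly to Kubernetes resources.
const exampleConfig = new k8s.core.v1.ConfigMap('example', {
    metadata: { labels: { app: 'example', env: 'dev' } },
    data: { 'config.txt': 'placeholder contents' },
}, { provider: k8sProvider });
```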
gifted-island-55702
@white-balloon-205, @faint-motherboard-95438 yes, instead of exporting the provider, I now export only the kubeconfig and create a k8s provider in the stack that references the kubeconfig output from the other stack - this works fine. Thank you!
creamy-potato-29402
that’s right
you have to export the kubeconfig, unfortunately