# google-cloud
c
for some reason, if I implement a class for a single app, the app will deploy fine. But if I try to make a class that houses several apps together, the provider fails with a message saying it failed to parse, even though the kubeconfig looks exactly the same in console.log
w
Can you share the error message? I don’t follow exactly what you are describing - but yes you should definitely be able to pass a Kube provider to multiple resources and/or components.
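For reference, sharing one provider across many resources/components should look roughly like this (a minimal sketch, not your code - `kubeconfig` here is assumed to hold a YAML literal string):
```typescript
import * as k8s from "@pulumi/kubernetes";

// Assume `kubeconfig` is a YAML literal string (not a file path).
declare const kubeconfig: string;

// One provider instance, built once from the kubeconfig...
const provider = new k8s.Provider("k8s", { kubeconfig });

// ...can be handed to as many resources (or components) as you like.
const ns = new k8s.core.v1.Namespace("apps", {}, { provider });
const sa = new k8s.core.v1.ServiceAccount("runner", {
    metadata: { namespace: ns.metadata.name },
}, { provider });
```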
c
```
error: configured Kubernetes cluster is unreachable: failed to parse kubeconfig data in `kubernetes:config:kubeconfig`; this must be a YAML literal string and not a filename or path - yaml: line 11: could not find expected ':'
```
This is the error I see. I have a class that houses all of our Kubernetes objects for an app, like a Helm chart. If I make an instance of that class in index.ts and pass the provider, it works correctly.
However, if I make a class called environments that houses instances of all of our apps, pass the provider to the environments class, and have the environments class pass that same provider on to the app classes, then I get the above error.
it feels like I can only pass the provider directly once, but can't pass it on anywhere else.
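roughly, the shape of the code is this (a sketch with illustrative bodies - the real classes create all of our Kubernetes objects for each app):
```typescript
import * as k8s from "@pulumi/kubernetes";
import * as pulumi from "@pulumi/pulumi";

// Stands in for one app's worth of Kubernetes objects.
class App extends pulumi.ComponentResource {
    constructor(name: string, provider: k8s.Provider) {
        super("kubernetes:pf:App", name, {});
        // Each child resource gets the provider passed through explicitly.
        new k8s.core.v1.Namespace(name, {}, { provider, parent: this });
    }
}

// Houses instances of all of our apps, forwarding the same provider.
class Environments extends pulumi.ComponentResource {
    constructor(name: string, provider: k8s.Provider) {
        super("kubernetes:pf:Environments", name, {});
        new App(`${name}-datadog`, provider);
        new App(`${name}-mysql`, provider);
    }
}
```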
w
That definitely should not be the case. Any chance you can share a repro of this? How are you populating the `kubeconfig` setting on the Kubernetes provider? Also cc @gorgeous-egg-16927
c
let me mock up a sample for you
one of my coworkers updated to the latest version that just came out today for Mac (?), and the error seems to have gone away - he had also moved everything to index.ts, though
w
Interesting - I'm not aware of any changes we've made that would affect this. If you do see this again, I would love to know.
c
I will do some more testing and share my code if I get it again.
This is how I set up the code - then I would instantiate environments.ts
(for reference, in case I retrigger the issue)
```
Do you want to perform this update? yes
Updating (labs):
     Type                                                        Name           Status                  Info
     pulumi:pulumi:Stack                                         infra-labs     failed                  1 error
 +   ├─ kubernetes:peerfit:Datadog                               datadog-test   created
 +   ├─ kubernetes:pf:Environments                               app            created
 +   ├─ kubernetes:peerfit:Mysql                                 mysql-test     created
 +   ├─ kubernetes:core:ServiceAccount                           datadog-agent  creating failed         1 error
 +   ├─ kubernetes:core:Secret                                   mysql          creating failed         1 error
 +   ├─ kubernetes:core:Service                                  mysql          creating failed         1 error
 +   ├─ kubernetes:core:PersistentVolumeClaim                    mysql-pvc      creating failed         1 error
 +   ├─ kubernetes:core:Service                                  datadog-agent  creating failed         1 error
 +   ├─ kubernetes:rbac.authorization.k8s.io:ClusterRoleBinding  datadog-agent  creating failed         1 error
 +   ├─ kubernetes:rbac.authorization.k8s.io:ClusterRole         datadog-agent  creating failed         1 error
 +   ├─ kubernetes:apps:Deployment                               mysql          creating failed         1 error
 +   └─ kubernetes:apps:DaemonSet                                datadog-agent  creating failed         1 error
 
Diagnostics:
  kubernetes:core:Service (mysql):
    error: configured Kubernetes cluster is unreachable: failed to parse kubeconfig data in `kubernetes:config:kubeconfig`; this must be a YAML literal string and not a filename or path - yaml: line 11: could not find expected ':'
 
  kubernetes:core:PersistentVolumeClaim (mysql-pvc):
    error: configured Kubernetes cluster is unreachable: failed to parse kubeconfig data in `kubernetes:config:kubeconfig`; this must be a YAML literal string and not a filename or path - yaml: line 11: could not find expected ':'
 
  kubernetes:apps:DaemonSet (datadog-agent):
    error: configured Kubernetes cluster is unreachable: failed to parse kubeconfig data in `kubernetes:config:kubeconfig`; this must be a YAML literal string and not a filename or path - yaml: line 11: could not find expected ':'
 
  pulumi:pulumi:Stack (infra-labs):
    error: update failed
 
  kubernetes:core:Secret (mysql):
    error: configured Kubernetes cluster is unreachable: failed to parse kubeconfig data in `kubernetes:config:kubeconfig`; this must be a YAML literal string and not a filename or path - yaml: line 11: could not find expected ':'
 
  kubernetes:core:Service (datadog-agent):
    error: configured Kubernetes cluster is unreachable: failed to parse kubeconfig data in `kubernetes:config:kubeconfig`; this must be a YAML literal string and not a filename or path - yaml: line 11: could not find expected ':'
 
  kubernetes:rbac.authorization.k8s.io:ClusterRoleBinding (datadog-agent):
    error: configured Kubernetes cluster is unreachable: failed to parse kubeconfig data in `kubernetes:config:kubeconfig`; this must be a YAML literal string and not a filename or path - yaml: line 11: could not find expected ':'
 
  kubernetes:apps:Deployment (mysql):
    error: configured Kubernetes cluster is unreachable: failed to parse kubeconfig data in `kubernetes:config:kubeconfig`; this must be a YAML literal string and not a filename or path - yaml: line 11: could not find expected ':'
 
  kubernetes:core:ServiceAccount (datadog-agent):
    error: configured Kubernetes cluster is unreachable: failed to parse kubeconfig data in `kubernetes:config:kubeconfig`; this must be a YAML literal string and not a filename or path - yaml: line 11: could not find expected ':'
 
  kubernetes:rbac.authorization.k8s.io:ClusterRole (datadog-agent):
    error: configured Kubernetes cluster is unreachable: failed to parse kubeconfig data in `kubernetes:config:kubeconfig`; this must be a YAML literal string and not a filename or path - yaml: line 11: could not find expected ':'
 
Outputs:
  - clusterName: "labs"

Resources:
    + 3 created
    12 unchanged

Duration: 9s
```
got it again
```typescript
import * as k8s from "@pulumi/kubernetes";
import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";
import * as network from "./libraries/gcp/network";
import * as gke from "./libraries/gcp/gke";
import * as lbalance from "./libraries/gcp/httplb";

const config = new pulumi.Config();
const region = config.require("region");
const projectName = config.require("project");
const name = config.require("name");
const cidr = config.require("cidr");
const version = config.require("version");

const gcpProvider = new gcp.Provider(name, {
  region: region,
  project: projectName
});

const networks = new network.Networks(name, {
  name: name,
  provider: gcpProvider,
  project: projectName,
  region: region,
  cidr: cidr
}, {});

const cluster = new gke.Cluster(name, {
  name: name,
  provider: gcpProvider,
  project: projectName,
  region: region,
  network: networks.network.name,
  subnetwork: networks.subnetwork.name,
  version: version
}, {});

const lb = new lbalance.HTTPLB(name, projectName, region, networks.network, {});

const kubeconfig = () => {
    const context = `${gcp.config.project}_${gcp.config.zone}_${cluster.gke.name}`;
    return `apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ${cluster.gke.masterAuth.clusterCaCertificate}
    server: https://${cluster.gke.endpoint}
  name: ${context}
contexts:
- context:
    cluster: ${context}
    user: ${context}
  name: ${context}
current-context: ${context}
kind: Config
preferences: {}
users:
- name: ${context}
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: gcloud
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
`;
  };

// Create a Kubernetes provider instance that uses our cluster from above.
const clusterProvider = new k8s.Provider(name, {
  kubeconfig: kubeconfig(),
});

export const clusterName = cluster.gke.name;
```
this is my index.ts - the GitHub account repo has the Kubernetes library files
but if I instead move Datadog from environments.ts into index.ts, I am able to deploy Datadog without issue
sorry for the back and forth
w
So - the way you are building `kubeconfig` does look suspicious. `cluster.gke.name`, `cluster.gke.endpoint`, and `cluster.gke.masterAuth.clusterCaCertificate` are Outputs, so you can't just embed them in strings like this. They will `toString` to some text that includes a warning. You can likely see this yourself if you do a `console.log` of the value returned from `kubeconfig()` before returning it. You can fix this by changing that function to:
```typescript
const kubeconfig =
    pulumi.all([cluster.gke.name, cluster.gke.masterAuth.clusterCaCertificate, cluster.gke.endpoint])
          .apply(([clusterName, clusterCaCertificate, clusterEndpoint]) => {
    const context = `${gcp.config.project}_${gcp.config.zone}_${clusterName}`;
    return `apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ${clusterCaCertificate}
    server: https://${clusterEndpoint}
  name: ${context}
contexts:
- context:
    cluster: ${context}
    user: ${context}
  name: ${context}
current-context: ${context}
kind: Config
preferences: {}
users:
- name: ${context}
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: gcloud
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
`;
  });
```
(And then using just `kubeconfig` instead of `kubeconfig()` further down.)
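e.g. the provider construction at the bottom of your index.ts would become:
```typescript
// `kubeconfig` is now an Output<string>, which the provider accepts directly.
const clusterProvider = new k8s.Provider(name, {
  kubeconfig: kubeconfig,
});
```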
c
I will give that a try