# kubernetes
a
Anyone run into issues with `pulumi preview` giving different behavior to `pulumi up` when using GKE? I'm finding that `preview` fails to properly use `gcloud` to authenticate with the cluster, causing the preview to not see any existing Kubernetes resources. `up`, on the other hand, works as expected.
c
If you have error messages, I could probably help. Without more info, I'm guessing your kubeconfig is messed up, though.
a
ah! It appears I hit the issue only if I run `pulumi preview` with a `-r` flag. Then I start to get messages like:
`warning: configured Kubernetes cluster is unreachable: unable to load schema information from the API server: the server has asked for the client to provide credentials`
It's fine as a normal preview, though. `pulumi up -r` has the same errors.
c
`preview` doesn't consult the cluster; `preview -r` does. Does normal `pulumi up` work?
a
it appears not. Calling it a night now, but I'll dig more tomorrow. Everything works fine when run on my local machine, but I'm hitting issues when running in a container as part of CI. It's possible it's a permissions issue with the service account the CI system is using, but the account seems to work outside of Pulumi.
c
I'm very close to sure it's your kubeconfig file.
I'm guessing it's an auth problem, but it's been a while since I've run into this sort of thing.
a
The kubeconfig is the same locally and in CI. The only difference is that rather than using my personal/local gcloud auth, it's using a service account, and either it's not using gcloud properly or the account has insufficient perms. Testing the service account outside Pulumi, though, it seems to have sufficient perms.
i.e. the kubeconfig is actually generated by Pulumi code:
```typescript
// Imports for this snippet (the standard Pulumi SDK packages it references):
import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";
import * as k8s from "@pulumi/kubernetes";

// Builds a k8s.Provider whose kubeconfig authenticates through the gcloud CLI.
export function GenerateK8sProvider(name: string, cluster: gcp.container.Cluster): k8s.Provider {
  // Wait for the cluster outputs to resolve, then render a kubeconfig string.
  // prettier-ignore
  const kubeconfig = pulumi.
    all([ cluster.name, cluster.endpoint, cluster.masterAuth ]).
    apply(([ name, endpoint, masterAuth ]) => {
        const context = `${gcp.config.project}_${gcp.config.zone}_${name}`;
        return `apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ${masterAuth.clusterCaCertificate}
    server: https://${endpoint}
  name: ${context}
contexts:
- context:
    cluster: ${context}
    user: ${context}
  name: ${context}
current-context: ${context}
kind: Config
preferences: {}
users:
- name: ${context}
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: gcloud
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
`;
    });
  return new k8s.Provider(name, { kubeconfig: kubeconfig });
}
```
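
For reference, a minimal sketch of how a function like this might be consumed; the resource names and node count below are illustrative assumptions, not from the thread:

```typescript
import * as gcp from "@pulumi/gcp";
import * as k8s from "@pulumi/kubernetes";

// Hypothetical GKE cluster; "my-cluster" and the node count are example values.
const cluster = new gcp.container.Cluster("my-cluster", {
  initialNodeCount: 1,
});

// Build the provider from the cluster, then route a resource through it so the
// generated kubeconfig (and therefore the gcloud auth-provider) gets exercised.
const provider = GenerateK8sProvider("gke", cluster);
const appsNamespace = new k8s.core.v1.Namespace(
  "apps",
  { metadata: { name: "apps" } },
  { provider }
);
```

Since every resource under this provider authenticates by shelling out to `gcloud` (the `cmd-path: gcloud` entry above), an unauthenticated or missing `gcloud` in the CI container would surface as exactly the credential errors described earlier.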