proud-pizza-80589
09/26/2020, 8:20 PM
pulumi-nodejs:dynamic:Resource (newcluster-vpc-cni):
error: Command failed: kubectl apply -f /var/folders/93/trfs1ns93nx39y22gbwx6hmr0000gn/T/tmp-13508VziZRSVp56CV.tmp
error: You must be logged in to the server (the server has asked for the client to provide credentials)
Full error log, the YAML it tries to deploy, and my computer's kubeconfig: https://gist.github.com/roderik/1a969b10c4365841ab72e79b51152b9b
As far as I can tell, it means my kubeconfig cannot connect to the cluster correctly.
I have no AWS config file on my computer, and I have exported AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (working credentials; we constantly deploy to and interact with clusters using them in the CLI and code I'm trying to replace with Pulumi).
I assume this is because it did not authenticate correctly using aws-iam-authenticator?
gentle-diamond-70147
09/27/2020, 8:10 PM
The @pulumi/eks provider will construct its own kubeconfig, but allows overrides via environment variable. It looks like you already have a kubeconfig file in your home directory. I suspect it is conflicting with the kubeconfig that is being set for the EKS cluster itself. Can you try removing that file (and unsetting the KUBECONFIG environment variable, if you have it set) and trying again?
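A minimal sketch of that cleanup, assuming the default kubeconfig location at $HOME/.kube/config; the aws and pulumi commands at the end are left commented out as the retry steps, not run by the script itself:

```shell
#!/bin/sh
set -eu

# Move the local kubeconfig aside so it cannot conflict with the
# kubeconfig that @pulumi/eks generates for the cluster.
KUBE_DIR="${HOME}/.kube"
if [ -f "${KUBE_DIR}/config" ]; then
  mv "${KUBE_DIR}/config" "${KUBE_DIR}/config.bak"
fi

# Drop any KUBECONFIG override for this shell session.
unset KUBECONFIG

# Optionally confirm the exported AWS credentials resolve to an identity,
# then re-run the deployment:
#   aws sts get-caller-identity
#   pulumi up
```

Note that unsetting KUBECONFIG only affects the current shell; if it is exported from a shell profile, it will return in new sessions.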