# aws
And just in case anyone knows the answer to the following issue: I’m trying to make a simple EKS cluster via Pulumi. My code and package.json: https://gist.github.com/roderik/473ec5c381526e0a668894d7dcef6459 It always fails with the following error (the cluster itself is deployed):
```
pulumi-nodejs:dynamic:Resource (newcluster-vpc-cni):
    error: Command failed: kubectl apply -f /var/folders/93/trfs1ns93nx39y22gbwx6hmr0000gn/T/tmp-13508VziZRSVp56CV.tmp
    error: You must be logged in to the server (the server has asked for the client to provide credentials)
```
Full error log, the YAML it tries to deploy, and my computer’s kubeconfig: https://gist.github.com/roderik/1a969b10c4365841ab72e79b51152b9b As far as I can tell, this means my kubeconfig cannot connect to the cluster correctly. I have no AWS config file on my computer and have exported AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (working credentials; we deploy and interact with clusters using these credentials constantly in the CLI and in the code I’m trying to replace with Pulumi). I assume this is because it did not authenticate correctly using aws-iam-authenticator?
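For reference, a minimal `@pulumi/eks` program looks roughly like this (a sketch, not the exact code from the gist; the resource name `newcluster` is taken from the error above):

```typescript
import * as eks from "@pulumi/eks";

// @pulumi/eks provisions the cluster and generates a kubeconfig for it,
// which it uses internally to apply resources such as the VPC CNI plugin
// (the "newcluster-vpc-cni" resource failing above).
const cluster = new eks.Cluster("newcluster");

// Export the generated kubeconfig so it can be compared with the local one.
export const kubeconfig = cluster.kubeconfig;
```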
The `@pulumi/eks` provider will construct its own kubeconfig but allows for overrides via an environment variable. It looks like you already have a kubeconfig file in your home directory. I suspect this is conflicting with the kubeconfig that is being set for the EKS cluster itself. Can you try removing that file and the `KUBECONFIG` environment variable, if you have it set, and try again?
(Copied my response from the support ticket here to help others who have questions about this too.)
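Concretely, the suggestion amounts to something like the following (a sketch; it assumes the local kubeconfig lives at the default `~/.kube/config` path):

```sh
# Move the local kubeconfig out of the way (assumes the default location)
mv ~/.kube/config ~/.kube/config.bak

# Clear any override so @pulumi/eks falls back to the kubeconfig it generates
unset KUBECONFIG

# Re-run the deployment
pulumi up
```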