# aws
I’m trying to set up an EKS cluster using `@pulumi/eks`, and I consistently get these errors when it’s trying to install the nodeAccess ConfigMap and the vpc-cni plugin:
```
kubernetes:core/v1:ConfigMap (supaglue-production-nodeAccess):
    error: configured Kubernetes cluster is unreachable: unable to load schema information from the API server: Get "https://31D666DA6474E04F6D9BF247B0C4A017.gr7.us-west-2.eks.amazonaws.com/openapi/v2?timeout=32s": getting credentials: decoding stdout: couldn't get version/kind; json parse error: json: cannot unmarshal array into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" }

  eks:index:VpcCni (supaglue-production-vpc-cni):
    error: Command failed: kubectl apply -f /var/folders/5p/wkr3ydl163jg9b0t7vf_7h080000gn/T/tmp-31472OyZMmul3goiI.tmp
    Unable to connect to the server: getting credentials: decoding stdout: couldn't get version/kind; json parse error: json: cannot unmarshal array into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" }
```
Is the kubeconfig that gets generated incorrect? It seems like it might be. Has anyone seen this before?
This is the code I’m using:
```ts
import * as eks from '@pulumi/eks';

// clusterName, vpcId, publicSubnetIds, and privateSubnetIds are defined earlier.
const cluster = new eks.Cluster(clusterName, {
  name: clusterName,
  vpcId,
  publicSubnetIds,
  privateSubnetIds,
  instanceType: 't4g.medium',
  desiredCapacity: 3,
  minSize: 3,
  maxSize: 10,
  deployDashboard: false,
  enabledClusterLogTypes: ['api', 'audit', 'authenticator', 'controllerManager', 'scheduler'],
  version: '1.24',
  createOidcProvider: true,
});
```
I see the same issue when I invoke with the default config as well:
```ts
const cluster = new eks.Cluster(clusterName);
```
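In case it helps with debugging, here’s a minimal sketch of how to dump the kubeconfig the component generates so its `exec` credential section can be inspected (assuming the `cluster` from the snippet above):
```ts
// Export the generated kubeconfig; `pulumi stack output kubeconfig` then shows
// the users[].user.exec block, i.e. the exact `aws eks get-token` invocation
// the Kubernetes provider relies on for credentials.
export const kubeconfig = cluster.kubeconfig;
```
Pointing `kubectl --kubeconfig` at that exported file should reproduce the same json parse error outside of Pulumi if the credential plugin output is the problem.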
Figured it out. My `~/.aws/config` file had `output=yaml_stream`, while this provider assumes `json`. It should probably set the env var `AWS_DEFAULT_OUTPUT=json` when invoking `aws eks get-token`, or pass the `--output json` flag. I’ll create a PR to fix it.
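In the meantime, here’s an untested sketch of a workaround that forces JSON output for the exec credential call without editing `~/.aws/config`, by injecting the env var into the generated kubeconfig. Note that this only helps resources created through the explicit provider below; the ConfigMap and vpc-cni that the component installs internally still use its own kubeconfig.
```ts
import * as k8s from '@pulumi/kubernetes';

// Untested sketch: add AWS_DEFAULT_OUTPUT=json to the exec credential plugin in
// the generated kubeconfig, so `aws eks get-token` emits JSON even when
// ~/.aws/config sets output=yaml_stream.
const patchedKubeconfig = cluster.kubeconfig.apply(kc => {
  const config = typeof kc === 'string' ? JSON.parse(kc) : kc;
  for (const user of config.users ?? []) {
    if (user.user?.exec) {
      user.user.exec.env = [
        ...(user.user.exec.env ?? []),
        { name: 'AWS_DEFAULT_OUTPUT', value: 'json' },
      ];
    }
  }
  return JSON.stringify(config);
});

// Use the patched kubeconfig for resources deployed into the cluster.
const k8sProvider = new k8s.Provider('patched-provider', {
  kubeconfig: patchedKubeconfig,
});
```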
Good catch! A PR (or even just an issue) would be great. 🙌