
Robert-Jan Huijsman

about 1 year ago
Hey folks. Anyone else suddenly unable to even `preview` their EKS resources?
```
[...]

    kubernetes:core/v1:Service istio-loadbalancer-hostname  warning: configured Kubernetes cluster is unreachable: unable to load schema information from the API server: Get "https://D01A49DEECB6E23C1A3454E1676679ED.gr7.us-east-2.eks.amazonaws.com/openapi/v2?timeout=32s": getting credentials: exec: executable aws failed with exit code 255
    kubernetes:core/v1:Service istio-loadbalancer-hostname  error: Preview failed: failed to read resource state due to unreachable cluster. If the cluster was deleted, you can remove this resource from Pulumi state by rerunning the operation with the PULUMI_K8S_DELETE_UNREACHABLE environment variable set to "true"
    pulumi:pulumi:Stack resemble-clusters-aws-staging1 running error: preview failed
    kubernetes:core/v1:Service istio-loadbalancer-hostname  1 error; 1 warning
    pulumi:pulumi:Stack resemble-clusters-aws-staging1  1 error; 6 messages

Diagnostics:
  kubernetes:core/v1:Service (istio-loadbalancer-hostname):
    warning: configured Kubernetes cluster is unreachable: unable to load schema information from the API server: Get "https://D01A49DEECB6E23C1A3454E1676679ED.gr7.us-east-2.eks.amazonaws.com/openapi/v2?timeout=32s": getting credentials: exec: executable aws failed with exit code 255
    error: Preview failed: failed to read resource state due to unreachable cluster. If the cluster was deleted, you can remove this resource from Pulumi state by rerunning the operation with the PULUMI_K8S_DELETE_UNREACHABLE environment variable set to "true"

  [...]

  pulumi:pulumi:Stack ([...]):
    error: preview failed

    <botocore.awsrequest.AWSRequest object at 0x78fc7c333190>
    <botocore.awsrequest.AWSRequest object at 0x77d185d577d0>
```
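(For the record, the `PULUMI_K8S_DELETE_UNREACHABLE` escape hatch that error mentions is only meant for clusters that are genuinely gone, which this one isn't; for completeness, using it would look something like this:)
```
# Only if the cluster really was deleted: rerun the operation with this set
# so the unreachable resource gets dropped from Pulumi state.
PULUMI_K8S_DELETE_UNREACHABLE=true pulumi up
```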
It seems the `kubeconfig` produced by the EKS provider is incomplete!? That `kubeconfig` has the following (redacted):
```
[...]
        "exec": {
          "apiVersion": "client.authentication.k8s.io/v1beta1",
          "args": [
            "eks",
            "get-token",
            "--cluster-name",
            "staging1",
            "--role",
            "arn:aws:iam::123456789012:role/Administrator"
          ],
          "command": "aws",
          "env": [
            {
              "name": "KUBERNETES_EXEC_INFO",
              "value": "{\"apiVersion\": \"client.authentication.k8s.io/v1beta1\"}"
            }
          ]
        }
[...]
```
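As a stopgap, hand-editing that `kubeconfig` and appending the region to the exec args ought to work; a sketch (region taken from the cluster endpoint above), not what the provider currently emits:
```
[...]
          "args": [
            "eks",
            "get-token",
            "--cluster-name",
            "staging1",
            "--role",
            "arn:aws:iam::123456789012:role/Administrator",
            "--region",
            "us-east-2"
          ],
[...]
```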
When I run that myself, I indeed get the reported error:
```
$ aws eks get-token --cluster-name staging1 --role arn:aws:iam::123456789012:role/Administrator

<botocore.awsrequest.AWSRequest object at 0x72e04d3b0a10>

$ echo $?
255
```
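(Presumably the CLI just has no default region to fall back on in this environment; a quick way to check:)
```
# Does the AWS CLI resolve a default region from its config/env at all?
# Empty output here would explain why get-token has nothing to fall back on.
aws configure get region
```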
But if I add the missing `--region` flag to the command, then I can run it just fine:
```
$ aws eks get-token --cluster-name staging1 --role arn:aws:iam::123456789012:role/Administrator --region=us-east-2
{
    "kind": "ExecCredential",
    "apiVersion": "client.authentication.k8s.io/v1beta1",
    "spec": {},
    "status": {
        "expirationTimestamp": "2024-05-16T15:44:50Z",
        "token": "k8s-aws-v1.[...]"
    }
}
```
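In the meantime, since the AWS CLI will also pick the region up from the environment, exporting one before running Pulumi seems like a reasonable workaround; a sketch, assuming the same shell runs both commands:
```
# Workaround sketch: give the AWS CLI a default region so the exec-based
# credential plugin in the generated kubeconfig can reach the cluster even
# without an explicit --region flag.
export AWS_REGION=us-east-2
pulumi preview
```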
So why does Pulumi's EKS provider suddenly produce a `kubeconfig` that doesn't work!? And/or what happened to cause the `--region` flag to have gone missing?