# kubernetes
n
Is there some way to utilize the Pulumi Deployments OIDC auth mechanism for auth to an EKS cluster? I sort of expected it to just work since I'm able to auth to AWS. My best guess is that the Kubernetes provider is trying to use the `aws` CLI to get an auth token, and maybe the AWS CLI isn't actually getting credentials injected? Ultimately I'm just getting:
```
pulumi:providers:kubernetes$kubernetes:core/v1:Namespace$kubernetes:helm.sh/v3:Release (platform-eks-datadog-operator)
    error: could not get server version from Kubernetes: the server has asked for the client to provide credentials
```
f
I don't know about AWS, but I use GKE. With GKE, there is a GKE-specific `kubeconfig` that tells k8s how to authenticate (using the gcloud auth plugin). You can pass this kubeconfig to the Kubernetes Provider resource. I imagine AWS does it similarly, so you should be able to construct an AWS-specific kubeconfig, then pass it along to the provider.
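The pattern described above can be sketched in Python; the provider usage in the comment assumes the `pulumi_kubernetes` package, and the file path is illustrative:

```python
import pathlib


def load_kubeconfig(path: str) -> str:
    """Read a cluster-specific kubeconfig from disk so it can be passed
    explicitly to the Pulumi Kubernetes provider, instead of relying on
    whatever ambient credentials the provider discovers on its own."""
    return pathlib.Path(path).expanduser().read_text()


# Assumed usage inside a Pulumi program (not executed here):
#   import pulumi
#   import pulumi_kubernetes as k8s
#   provider = k8s.Provider("gke", kubeconfig=load_kubeconfig("~/.kube/gke-config"))
#   ns = k8s.core.v1.Namespace("example",
#                              opts=pulumi.ResourceOptions(provider=provider))
```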
s
The "default" kubeconfig for an EKS cluster uses a binary called `aws-iam-authenticator` that leverages your AWS creds in the same way the AWS CLI would. If it isn't working with Deployments, then I would a) wonder if creds are actually getting injected, or b) wonder if the Deployments image contains the necessary binary. Let me inquire about that second item internally and see if I can get any information.
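One cheap way to test possibility (a) from inside the Deployments environment is to check for the credential variables directly; `missing_aws_creds` is a hypothetical helper name, and the variable list is the set Deployments is expected to inject:

```python
import os

# Environment variables Pulumi Deployments is expected to inject for AWS OIDC auth.
REQUIRED_VARS = ("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_SESSION_TOKEN")


def missing_aws_creds(env=None):
    """Return the names of any expected AWS credential variables that are
    unset or empty in the given environment (default: os.environ)."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_VARS if not env.get(name)]


# e.g. print(missing_aws_creds()) in the deployment's pre-run step:
# an empty list means the creds appear to be injected.
```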
n
yeah, my EKS-generated kubeconfig shows (in part):
```
users          : [
    [0]: {
        name: "aws"
        user: {
            exec: {
                apiVersion: "client.authentication.k8s.io/v1beta1"
                args      : [
                    [0]: "eks"
                    [1]: "get-token"
                    [2]: "--cluster-name"
                    [3]: "platform-eks"
                ]
                command   : "aws"
                env       : [
                    [0]: {
                        name : "KUBERNETES_EXEC_INFO"
                        value: (json) {
                            apiVersion: "client.authentication.k8s.io/v1beta1"
                        }
                    }
                ]
            }
        }
    }
]
```
I would sort of expect that to cause some other, more immediate error if that command failed in some fashion. My understanding of Deployments is that it is injecting `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_SESSION_TOKEN`, which seems like it would be enough to make this work, so... definitely confused
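One way to isolate the failure is to run the same command the exec stanza above would run, using only the injected credentials; a small sketch (the helper name is illustrative):

```python
import subprocess


def eks_token_cmd(cluster_name: str) -> list:
    """Build the same command the EKS kubeconfig's exec stanza runs to
    fetch a cluster auth token via the AWS CLI."""
    return ["aws", "eks", "get-token", "--cluster-name", cluster_name]


# Running it manually (requires the AWS CLI on PATH and valid credentials
# in the environment); if this fails, the problem is the injected creds
# or the missing binary, not the Kubernetes provider itself:
#   subprocess.run(eks_token_cmd("platform-eks"), check=True)
```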
s
Ah, so this kubeconfig replaces `aws-iam-authenticator` with the `aws eks get-token` command, which I presume has the same basic effect. In that regard, I agree: I would expect that injecting the AWS credentials as you described should just work.