# kubernetes

hallowed-horse-57635

11/11/2022, 1:28 AM
Question on authentication (users in kubeconfig) using the Automation API and the Kubernetes provider: I'm trying to deploy a Deployment (an example nginx server) using the Pulumi Automation API against an AWS EKS cluster. The kubeconfig (the portion with `users:`) is listed below. The question is how the Automation API gets the credentials to run the `aws` command. We get the error "~ kubernetes:apps/v1:Deployment id3 refreshing (5s) warning: configured Kubernetes cluster is unreachable: unable to load schema information from the API server: the server has asked for the client to provide credentials". I have tried adding environment variables in the kubeconfig to pass the secret and access key, but that doesn't seem to work (the `aws` command cannot take the key and secret as arguments). If I set the AWS profile explicitly (`aws configure`) in the container where the API is running, it all works fine, but we don't want to hardcode the creds in the container; we want to pass the kubeconfig file dynamically. There are many examples using the CLI, but I can't find anything on how the Automation API works for this use case. Any pointers are appreciated.

```yaml
users:
  - name: arn:aws:eks:us-west-1:XXXXXXX:cluster/eks-clu01a
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        args:
          - --region
          - us-west-1
          - eks
          - get-token
          - --cluster-name
          - eks-clu01a
        command: aws
```
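For reference, a minimal sketch of passing a kubeconfig dynamically through the Automation API (TypeScript; the file path, project, and stack names are placeholders). It only covers handing the kubeconfig to an explicit `k8s.Provider` from an inline program; the `aws eks get-token` exec plugin inside that kubeconfig still needs AWS credentials to resolve, which is what the rest of this thread is about.

```typescript
// Minimal sketch: Automation API inline program that uses a kubeconfig
// supplied at runtime ("kubeconfig.yaml" is a hypothetical path) rather than
// whatever ~/.kube/config or AWS profile exists in the container.
import * as fs from "fs";
import * as k8s from "@pulumi/kubernetes";
import { LocalWorkspace } from "@pulumi/pulumi/automation";

async function main() {
    const kubeconfig = fs.readFileSync("kubeconfig.yaml", "utf8"); // supplied dynamically

    const stack = await LocalWorkspace.createOrSelectStack({
        stackName: "dev",
        projectName: "eks-nginx",
        program: async () => {
            // Explicit provider so the program uses exactly this kubeconfig.
            const provider = new k8s.Provider("eks", { kubeconfig });

            new k8s.apps.v1.Deployment("nginx", {
                spec: {
                    selector: { matchLabels: { app: "nginx" } },
                    replicas: 1,
                    template: {
                        metadata: { labels: { app: "nginx" } },
                        spec: { containers: [{ name: "nginx", image: "nginx" }] },
                    },
                },
            }, { provider });
        },
    });

    await stack.up({ onOutput: console.log });
}

main();
```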

billowy-army-68599

11/11/2022, 4:15 PM
Ultimately, you need to be authenticated to AWS to be able to speak to the AWS control plane. You can authenticate outside the kubeconfig with environment variables (like an access key and secret key) if you prefer.
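One way to do that without exporting the variables container-wide is to scope them to the Automation API workspace. A sketch, assuming TypeScript and a hypothetical secret source: the idea is that `envVars` on `LocalWorkspace` is set only for the Pulumi processes the workspace spawns (and their children, including the `aws` exec credential plugin), not for the whole container.

```typescript
// Sketch: AWS credentials scoped to the Automation API workspace via envVars.
// Stack/project names and the EKS_DEPLOY_* variables are placeholders.
import { LocalWorkspace } from "@pulumi/pulumi/automation";

async function main() {
    const stack = await LocalWorkspace.createOrSelectStack(
        {
            stackName: "dev",
            projectName: "eks-nginx",
            program: async () => {
                /* provider + Deployment as in the sketch above */
            },
        },
        {
            // Passed to the processes this workspace spawns, not exported globally.
            envVars: {
                AWS_ACCESS_KEY_ID: process.env.EKS_DEPLOY_KEY_ID!,     // hypothetical source
                AWS_SECRET_ACCESS_KEY: process.env.EKS_DEPLOY_SECRET!, // hypothetical source
                AWS_REGION: "us-west-1",
            },
        },
    );
    await stack.up({ onOutput: console.log });
}

main();
```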

hallowed-horse-57635

11/11/2022, 6:08 PM
I understand that, but how do we do this with the Automation API? We don't want to hardcode the creds in the container... hope that makes sense.

billowy-army-68599

11/11/2022, 6:22 PM
Automation API isn't the problem here. You need to get creds into the container somehow.

hallowed-horse-57635

11/11/2022, 6:28 PM
Maybe I'm not explaining the need well. We are able to define creds using AWS profiles, using get-token, etc., BUT they are available container-wide (which we don't want). Anyway, will do more research.
thanks
FYI, the solution: as this is not clearly documented anywhere, it took a while for us to make this work with the Pulumi Automation API. The credentials (key/secret) can be passed via the kubeconfig, as listed below, using the `env` block; the command then uses those env variables (the AWS CLI doesn't allow passing the access key/secret directly on the CLI). We will next try to make this work using STS tokens and certificates, and avoid passing creds this way.

```yaml
users:
  - name: arn:aws:eks:us-west-1:xxxxx:cluster/eks-clu01a
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        args:
          - --region
          - us-west-1
          - eks
          - get-token
          - --cluster-name
          - eks-clu01a
        command: aws
        env:
          - name: "AWS_ACCESS_KEY_ID"
            value: "XXXXXXXX"
          - name: "AWS_SECRET_ACCESS_KEY"
            value: "XXXXXXXXXXXXXX"
```
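As a follow-on sketch (not from the thread): the same `env` injection can be done in memory before handing the kubeconfig to the provider, so the creds never need to live in a kubeconfig file on disk or in an AWS profile inside the container. This assumes TypeScript and the `js-yaml` package; the file path and credential sources are placeholders.

```typescript
// Sketch: inject AWS creds into the exec user's env block in memory, then pass
// the resulting kubeconfig string straight to the Kubernetes provider.
import * as fs from "fs";
import * as yaml from "js-yaml";

function kubeconfigWithCreds(path: string, keyId: string, secret: string): string {
    const cfg = yaml.load(fs.readFileSync(path, "utf8")) as any;
    for (const u of cfg.users ?? []) {
        if (u.user?.exec) {
            // Same env block as the kubeconfig above, built at runtime.
            u.user.exec.env = [
                { name: "AWS_ACCESS_KEY_ID", value: keyId },
                { name: "AWS_SECRET_ACCESS_KEY", value: secret },
            ];
        }
    }
    return yaml.dump(cfg);
}

// Inside the inline program:
// const provider = new k8s.Provider("eks", {
//     kubeconfig: kubeconfigWithCreds("kubeconfig.yaml", keyId, secret),
// });
```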