b
Ultimately, you need to be authenticated to AWS to be able to speak to the AWS control plane. You can authenticate outside the kubeconfig with environment variables (like an access key and secret key) if you prefer.
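For example, a minimal sketch with the Python Automation API, assuming the standard pulumi / pulumi.automation packages; fetch_short_lived_creds, the secret file paths, and the project/stack names are placeholders, and the env vars are passed only to the Pulumi subprocess rather than exported container-wide:

import pulumi.automation as auto

def pulumi_program():
    # Inline Pulumi program that provisions / talks to AWS and EKS goes here.
    pass

def fetch_short_lived_creds():
    # Placeholder: read keys injected via mounted secret files, a secrets
    # manager, STS, etc. -- anything that avoids baking them into the image.
    with open("/run/secrets/aws_access_key_id") as a, \
         open("/run/secrets/aws_secret_access_key") as s:
        return a.read().strip(), s.read().strip()

access_key, secret_key = fetch_short_lived_creds()

stack = auto.create_or_select_stack(
    stack_name="dev",
    project_name="eks-demo",
    program=pulumi_program,
    opts=auto.LocalWorkspaceOptions(
        # Scoped to the Pulumi CLI subprocess, not set container-wide.
        env_vars={
            "AWS_ACCESS_KEY_ID": access_key,
            "AWS_SECRET_ACCESS_KEY": secret_key,
            "AWS_REGION": "us-west-1",
        },
    ),
)
stack.up(on_output=print)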
h
I understand that, but how do we do this with the Automation API? We don't want to hardcode the creds in the container... hope that makes sense.
b
The Automation API isn't the problem here. You need to get creds into the container somehow.
h
Maybe I'm not explaining the need well. We are able to define creds using AWS profiles, get-token, etc., BUT they are available container-wide (which we don't want). Anyway, will do more research.
thanks
FYI, the solution - as this is not clearly documented anywhere, it took us a while to make this work with the Pulumi Automation API. The credentials (key/secret) can be passed through the kubeconfig as listed below, using the env block; the command then picks up those env parameters (the AWS CLI doesn't allow passing the access key/secret directly on the command line). We will now try to make this work using STS tokens and certificates next, to avoid passing creds this way.

users:
- name: arn:aws:eks:us-west-1:xxxxx:cluster/eks-clu01a
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - us-west-1
      - eks
      - get-token
      - --cluster-name
      - eks-clu01a
      command: aws
      env:
      - name: "AWS_ACCESS_KEY_ID"
        value: "XXXXXXXX"
      - name: "AWS_SECRET_ACCESS_KEY"
        value: "XXXXXXXXXXXXXX"
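For context, a rough sketch (Python, same Automation API setup as above) of how a kubeconfig like this can be handed to an explicit Kubernetes provider inside the inline program; the /app/kubeconfig.yaml path and resource names are placeholders, while kubeconfig is the actual provider argument:

import pulumi
import pulumi_kubernetes as k8s
import pulumi.automation as auto

def pulumi_program():
    # Load the kubeconfig shown above (exec plugin + env block) and pass it
    # to an explicit Kubernetes provider instead of relying on ambient config.
    with open("/app/kubeconfig.yaml") as f:
        kubeconfig = f.read()
    provider = k8s.Provider("eks-clu01a", kubeconfig=kubeconfig)
    k8s.core.v1.Namespace(
        "demo",
        opts=pulumi.ResourceOptions(provider=provider),
    )

stack = auto.create_or_select_stack(
    stack_name="dev",
    project_name="eks-demo",
    program=pulumi_program,
)
stack.up(on_output=print)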
c
Hi! I have the same problem. How did you manage to solve it without using AWS keys?