Question on authentication (the users section of the kubeconfig) with the Automation API and the Kubernetes provider: we are trying to deploy a Deployment (an example nginx server) to an AWS EKS cluster using the Pulumi Automation API. The relevant portion of the kubeconfig (the users: section) is listed below. The question is how the Automation API gets the credentials needed to run the aws command. We get this error:
~ kubernetes:apps/v1:Deployment id3 refreshing (5s) warning: configured Kubernetes cluster is unreachable: unable to load schema information from the API server: the server has asked for the client to provide credentials
I have tried adding env variables in the kubeconfig to pass the secret and access key, but that doesn't seem to work (the aws command cannot take the key and secret as arguments). If I configure an AWS profile explicitly in the container where the Automation API is running, it all works fine, but we don't want to hardcode the credentials in the container; we want to pass the kubeconfig file in dynamically.
There are many examples using the CLI, but I can't find anything on how the Automation API works for this use case. Any pointers are appreciated. (A rough sketch of what I'm imagining is included after the kubeconfig excerpt below.)
users:
- name: arn:aws:eks:us-west-1:XXXXXXX:cluster/eks-clu01a
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - us-west-1
      - eks
      - get-token
      - --cluster-name
      - eks-clu01a
      command: aws
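
Is something like the following roughly how credentials are supposed to flow through the Automation API? My understanding is that the pulumi engine (and therefore the aws exec plugin from the kubeconfig above) runs as a child process of the Automation API program, so environment variables set on the workspace should be inherited by the aws command. This is just an untested Python sketch: the project/stack names, the kubeconfig path, and the way the credentials are obtained (here read from the host environment) are placeholders, not our actual setup.

import os

import pulumi
import pulumi_kubernetes as k8s
from pulumi import automation as auto


def pulumi_program():
    # Build the Kubernetes provider from the dynamically supplied kubeconfig
    # (the path is a placeholder).
    with open("/tmp/kubeconfig-eks-clu01a") as f:
        kubeconfig = f.read()
    provider = k8s.Provider("eks", kubeconfig=kubeconfig)

    # Example nginx Deployment, as in the original question.
    k8s.apps.v1.Deployment(
        "nginx",
        spec=k8s.apps.v1.DeploymentSpecArgs(
            replicas=1,
            selector=k8s.meta.v1.LabelSelectorArgs(match_labels={"app": "nginx"}),
            template=k8s.core.v1.PodTemplateSpecArgs(
                metadata=k8s.meta.v1.ObjectMetaArgs(labels={"app": "nginx"}),
                spec=k8s.core.v1.PodSpecArgs(
                    containers=[k8s.core.v1.ContainerArgs(name="nginx", image="nginx")]
                ),
            ),
        ),
        opts=pulumi.ResourceOptions(provider=provider),
    )


stack = auto.create_or_select_stack(
    stack_name="dev",              # placeholder
    project_name="eks-nginx",      # placeholder
    program=pulumi_program,
    opts=auto.LocalWorkspaceOptions(
        env_vars={
            # Credentials are injected into the workspace environment instead of
            # being baked into the container image; the pulumi engine and the
            # `aws eks get-token` exec plugin inherit them as child processes.
            # Here they are read from the host environment only as a placeholder;
            # they could come from a secrets store or the incoming request instead.
            "AWS_ACCESS_KEY_ID": os.environ["AWS_ACCESS_KEY_ID"],
            "AWS_SECRET_ACCESS_KEY": os.environ["AWS_SECRET_ACCESS_KEY"],
            "AWS_REGION": "us-west-1",
        }
    ),
)
stack.up(on_output=print)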