# kubernetes
b
@white-chef-55657 that's coming from the `kubectl` configuration that gets created. When an EKS cluster is created, it also creates a kubeconfig and a provider. I would export the `kubeconfig` from the EKS cluster and verify your AWS credentials have adequate access. The `roleMappings` are usually to blame
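for reference, a minimal sketch of what that looks like with `@pulumi/eks` (the cluster name and role ARN here are placeholders, adjust to your setup):

```typescript
import * as eks from "@pulumi/eks";

// Create the cluster and map an extra IAM role into the aws-auth ConfigMap,
// so callers assuming that role get kubectl access. The role ARN below is
// only illustrative.
const cluster = new eks.Cluster("example-cluster", {
    roleMappings: [{
        roleArn: "arn:aws:iam::123456789012:role/example-eks-admin",
        username: "eks-admin",
        groups: ["system:masters"],
    }],
});

// Export the generated kubeconfig so it can be pulled out of the stack.
export const kubeconfig = cluster.kubeconfig;
```

then `pulumi stack output kubeconfig > kubeconfig.json` and point `KUBECONFIG` at that file; kubectl still needs AWS credentials that either created the cluster or match one of the `roleMappings`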
w
ah.. I had the key/secret configured in the stack config, but I realize now that kubectl won’t be using those. I must also set the env vars for the key/secret, which makes the config vars redundant
you think it’s worth having the eks provider add the aws key/secret from the stack config to the environment? I don’t mind opening a PR with that
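something like this is what I had in mind, just as a sketch (not saying this is how the provider should do it): read the creds from the stack config and mirror them into the environment, so the exec plugin in the generated kubeconfig can find them while Pulumi runs:

```typescript
import * as pulumi from "@pulumi/pulumi";

// Sketch of the workaround: copy aws:accessKey / aws:secretKey from the
// stack config into the process environment before the cluster (and its
// kubeconfig-based Kubernetes provider) is created. Values stored with
// `pulumi config set --secret` are decrypted here at runtime.
const awsCfg = new pulumi.Config("aws");
process.env.AWS_ACCESS_KEY_ID = awsCfg.require("accessKey");
process.env.AWS_SECRET_ACCESS_KEY = awsCfg.require("secretKey");
```

that only helps processes spawned by the Pulumi run itself; a kubectl I run separately would still need the env vars (or a profile) set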
b
that's not really a practice we see all that often tbh. Most people configure the provider using AWS profiles or externally to Pulumi, or use IAM roles
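e.g. with a named profile it's roughly this (assuming a local profile called `eks-admin` exists and that your `@pulumi/eks` version has `providerCredentialOpts`, which bakes the profile into the generated kubeconfig):

```typescript
import * as aws from "@pulumi/aws";
import * as eks from "@pulumi/eks";

// Assumes a local AWS named profile "eks-admin" with adequate access;
// the region is only an example.
const awsProvider = new aws.Provider("eks-admin", {
    profile: "eks-admin",
    region: "us-west-2",
});

const cluster = new eks.Cluster("example-cluster", {
    // Have the kubeconfig's exec auth use the same profile as Pulumi.
    providerCredentialOpts: { profileName: "eks-admin" },
}, { providers: { aws: awsProvider } });

export const kubeconfig = cluster.kubeconfig;
```

that way kubectl and Pulumi share the same profile instead of raw keys in env vars or stack config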
w
right, so configuring AWS profiles is essentially the same as setting the env vars. That means I cannot rely solely on the pulumi stack config for credentials
the benefit of the pulumi stack config is the encrypted secrets, whereas with env vars I don’t have that
b
understood, I think it would be best to file an issue with your use case
w
will do, thanks