# pulumi-deployments
n
Anyone have any pointers for how to make the Pulumi Deployments OIDC auth work for talking to an EKS cluster?
I think the provider is trying to run `aws eks get-token ...` to get credentials, but I'm not positive - just know it's failing with "the server has asked for the client to provide credentials"
w
Deployments OIDC is for accessing the cloud providers. The k8s provider talks directly to the k8s cluster, so you need to either configure environment variables on the deployment for the default k8s provider to use, or pass the kubeconfig in as part of the stack config and instantiate the k8s provider in your code. See https://www.pulumi.com/registry/packages/kubernetes/installation-configuration/#setup (This all assumes the stack being deployed does not manage the k8s cluster, nor is the cluster managed by another Pulumi stack from which you could get the kubeconfig via stack reference.)
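For the stack-config route, here's a minimal sketch of what that could look like (assuming a secret stack config value named `kubeconfig` and the v4 Go SDK for the kubernetes provider - the names here are illustrative):

```go
package main

import (
	"github.com/pulumi/pulumi-kubernetes/sdk/v4/go/kubernetes"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi/config"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// Hypothetical config key; set it with:
		//   pulumi config set --secret kubeconfig "$(cat ~/.kube/config)"
		cfg := config.New(ctx, "")
		kubeconfig := cfg.RequireSecret("kubeconfig")

		// Explicit provider that uses the supplied kubeconfig rather than
		// whatever ambient credentials exist on the deployment runner.
		_, err := kubernetes.NewProvider(ctx, "k8s", &kubernetes.ProviderArgs{
			Kubeconfig: kubeconfig,
		})
		return err
	})
}
```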
n
@witty-candle-66007 in this case it actually does manage the k8s cluster and is grabbing the kubeconfig from the EKS resource. The EKS-generated kubeconfig looks like it is executing `aws eks get-token`, and as far as I can tell that should be using the AWS env vars injected by Deployments, but somewhere along the way something isn't working. I'm at a little bit of an impasse as far as how to debug it though, any pointers?
it works locally with `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_SESSION_TOKEN` set (and no other means of authenticating to k8s that I can imagine), and as far as I can tell Deployments is able to auth to AWS just fine
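(for reference, the user entry in the generated kubeconfig looks roughly like this - cluster name illustrative - so the exec'd aws CLI should be inheriting those env vars:)

```yaml
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      # get-token reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY /
      # AWS_SESSION_TOKEN from the environment it runs in
      args: ["eks", "get-token", "--cluster-name", "platform-eks"]
```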
w
Let me run a quick test on my end.
n
and it seems a little suspicious to me that it's not e.g. printing some kind of error from that `get-token` invocation
thanks, appreciate any help I can get on this
w
It might be a bit longer than “quick” if only because it takes a minute to stand up an EKS cluster, but I’ll let you know what I find out.
n
heh I hear that, appreciate it
w
I’m seeing the issue in my test as well. When you mention the “EKS-generated kubeconfig”, can you clarify what you mean by that? How are you getting the kubeconfig in your code?
n
```go
cluster, err := eks.NewCluster(ctx, "platform-eks", &eks.ClusterArgs{
	// Omitted a bunch of junk here
})
if err != nil {
	return nil, err
}

eksProvider, err := kubernetes.NewProvider(ctx, "platform-eks-provider", &kubernetes.ProviderArgs{
	Kubeconfig: cluster.KubeconfigJson,
})
if err != nil {
	return err
}
```
(those aren't actually side by side like that but you get the idea)
w
Yeah. And is that using the pulumi/eks package?
or the aws.eks resource directly?
n
the Pulumi one,
"<http://github.com/pulumi/pulumi-eks/sdk/go/eks|github.com/pulumi/pulumi-eks/sdk/go/eks>"
w
yep - that’s what I assumed - just wanted to make sure.
Let me look into this further.
n
appreciate it, let me know if there's anything I can do to assist
w
will do
Can you confirm what versions of the aws and eks packages you are using?
n
`github.com/pulumi/pulumi-eks/sdk` v1.0.2 and `github.com/pulumi/pulumi-aws/sdk/v5` v5.42.0
w
With code I have that reproduces your error message, using Pulumi Deployments to deploy the stack from scratch works - so it’s not an OIDC interaction. Instead, at least in my case, it’s due to different authentication methods between my command-line `pulumi up` and the OIDC used in Pulumi Deployments. In a nutshell, the kubeconfig generated on the initial deployment from my command line can’t be used by Deployments, since the credentials are different. I haven’t tested it yet, but leveraging the ProviderCredentialOpts property, as per this GitHub issue comment, should work: https://github.com/pulumi/pulumi-eks/issues/669#issuecomment-1429190235
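Untested on my end, but the shape would be something like this minimal sketch (the role ARN is illustrative, and all other cluster args are omitted):

```go
package main

import (
	"github.com/pulumi/pulumi-eks/sdk/go/eks"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// ProviderCredentialOpts bakes explicit credential settings into the
		// generated kubeconfig, so that `aws eks get-token` resolves the same
		// identity under Deployments as it does locally.
		_, err := eks.NewCluster(ctx, "platform-eks", &eks.ClusterArgs{
			ProviderCredentialOpts: &eks.KubeconfigOptionsArgs{
				// Illustrative ARN - use a role the Deployments OIDC
				// credentials are allowed to assume.
				RoleArn: pulumi.String("arn:aws:iam::123456789012:role/eks-cluster-admin"),
			},
		})
		return err
	})
}
```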