# kubernetes
b
I’m having some challenges in understanding `ProviderCredentialOpts`. For example, I have a Pulumi stack in which I set (via config) the AWS access key and secret, as it is in a different account than the one my AWS CLI (i.e. `~/.aws`) is configured with. I then create a cluster like this:
```go
cluster, err := eks.NewCluster(ctx, "my-cluster", &eks.ClusterArgs{
	VpcId:            vpc.ID(),
	PublicSubnetIds:  pulumi.ToStringArrayOutput(publicSubnetIDs),
	PrivateSubnetIds: pulumi.ToStringArrayOutput(privateSubnetIDs),
	EnabledClusterLogTypes: pulumi.StringArray{
		pulumi.String("api"),
		pulumi.String("audit"),
		pulumi.String("authenticator"),
	},
	SkipDefaultNodeGroup: pulumi.BoolPtr(true),
	InstanceRoles: iam.RoleArray{
		// role0,
		// role1,
		role2,
	},
	NodeAssociatePublicIpAddress: pulumi.Bool(false),
	Version:                      pulumi.String("1.20"),
	UseDefaultVpcCni:             pulumi.Bool(true),
})
if err != nil {
	return err
}
```
This works fine, but I get an error at the end basically saying that it could not connect to the cluster (I believe to apply the CNI or other settings) because it could not authenticate. To enable authentication, I had to:
1. Create a new profile in my `~/.aws` folder with the credentials for this new account (I called it `ssa`).
2. Add the following to the cluster create above:
```go
ProviderCredentialOpts: eks.KubeconfigOptionsArgs{
	ProfileName: pulumi.String("ssa"),
},
```
Now it connects properly and the errors are gone. However, I am not quite sure I follow why this is necessary, and the docs/examples are a bit sparse. Specifically, I am a bit concerned that I have to specify a particular profile (one that I have to configure out of band on whatever machine is running Pulumi), which doesn’t seem easily repeatable. Given Pulumi already has the AWS credentials it used to create the cluster, why can’t it use those when talking to Kubernetes proper?
b
This is a side effect of the way EKS and the AWS CLI work. We need a valid kubeconfig to talk to the EKS control plane to set some options like the CNI (as you mentioned). We use kubectl for this; there's really no other way to do it, because these configurations are preinstalled on EKS clusters: https://github.com/pulumi/pulumi-eks/blob/4f4a75b17de98cf2f9c3d34a960b59503cbc4f0a/nodejs/eks/cmd/provider/cni.ts#L158
If you look at the kubeconfig that EKS returns (which kubectl uses), it calls `aws eks get-token` to talk to the control plane. The only credential-related options that command takes are `--profile` and `--role-arn`, and the token is never really touched by Pulumi. You can see this here: https://github.com/pulumi/pulumi-eks/blob/c0d357bdf3f283006f8b0a6cd4bc2f1c09df34c0/nodejs/eks/cluster.ts#L182
The short version is: it's a limitation of EKS and unfortunately, there's not much we can do 😞
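To make the exec mechanism concrete, here is a rough sketch (not the provider's actual code; the helper name is made up) of the shape of the user entry in the generated kubeconfig. Authentication is delegated to the AWS CLI via an exec plugin, and the profile from `ProviderCredentialOpts` is simply threaded through as `--profile`:
```go
// Sketch only: approximates the user entry pulumi-eks emits in the
// generated kubeconfig. kubectl runs the exec command below and
// authenticates with the token it prints.
func kubeconfigUserEntry(clusterName, profileName string) map[string]interface{} {
	args := []string{"eks", "get-token", "--cluster-name", clusterName}
	if profileName != "" {
		// The profile set via ProviderCredentialOpts lands here; this is
		// essentially the only credential knob the exec plugin exposes.
		args = append(args, "--profile", profileName)
	}
	return map[string]interface{}{
		"exec": map[string]interface{}{
			"apiVersion": "client.authentication.k8s.io/v1alpha1",
			"command":    "aws",
			"args":       args,
		},
	}
}
```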
To clarify this part:
> Given Pulumi already has the AWS credentials to use to authenticate to create the cluster, why can’t it use those when talking to Kubernetes proper
The provider itself uses the AWS Go SDK, which is way more configurable than the kubeconfig/`aws eks get-token` path.
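For comparison, a minimal sketch of that configurability on the provider side, assuming hypothetical stack config keys `awsAccessKey`/`awsSecretKey` and the pulumi-aws Go SDK:
```go
import (
	"github.com/pulumi/pulumi-aws/sdk/v4/go/aws"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi/config"
)

// Sketch: an explicit AWS provider fed credentials straight from stack
// config, with no dependency on a local ~/.aws profile. The config key
// names are hypothetical.
func newAWSProvider(ctx *pulumi.Context) (*aws.Provider, error) {
	cfg := config.New(ctx, "")
	return aws.NewProvider(ctx, "aws-alt-account", &aws.ProviderArgs{
		Region:    pulumi.String("us-east-1"),
		AccessKey: cfg.RequireSecret("awsAccessKey"),
		SecretKey: cfg.RequireSecret("awsSecretKey"),
	})
}
```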
b
@billowy-army-68599 thanks for clarifying. We’ve done something similar to this in the past to authenticate programmatically: https://vnt-software.com/accessing-an-amazon-eks-kubernetes-cluster/
It is possible to construct the authentication token programmatically and then use it.
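That article’s approach boils down to presigning an STS `GetCallerIdentity` call, which is what aws-iam-authenticator does under the hood. A minimal Go sketch using aws-sdk-go v1, assuming the session is built from the same credentials the stack config holds:
```go
import (
	"encoding/base64"
	"time"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sts"
)

// getEKSToken builds an EKS bearer token by presigning an STS
// GetCallerIdentity request. The cluster name is bound into the
// signature via the x-k8s-aws-id header, and the presigned URL is
// wrapped in the k8s-aws-v1 format that EKS expects.
func getEKSToken(sess *session.Session, clusterName string) (string, error) {
	stsClient := sts.New(sess)
	req, _ := stsClient.GetCallerIdentityRequest(&sts.GetCallerIdentityInput{})
	req.HTTPRequest.Header.Add("x-k8s-aws-id", clusterName)
	// Tokens are short-lived; callers must refresh them before expiry.
	presignedURL, err := req.Presign(15 * time.Minute)
	if err != nil {
		return "", err
	}
	return "k8s-aws-v1." + base64.RawURLEncoding.EncodeToString([]byte(presignedURL)), nil
}
```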
b
I suppose it's possible, but you'd need to build your own provider to do it, then grab the config values from your provider to pass to the auth request
b
I was hoping that the official provider could handle this, so it wasn’t dependent on this side-channel state being set up (and then set up uniformly on every machine where it is going to be executed). 🙂
Thanks for clarifying the behavior @billowy-army-68599
b
I'm trying to figure out how we'd get that to work cleanly, because we'd have to put the token in the generated kubeconfig and refresh it regularly; the `eks get-token` mechanism handles the refresh too
b
It could be changed to call something like `pulumi eks get-token <stackname>` (just like it invokes `aws eks get-token`), which internally would use this pattern (i.e. get the credentials from the stack config, generate the pre-signed token, and return it)
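If such a hypothetical `pulumi eks get-token` command existed, it would just need to speak client-go's exec credential protocol, i.e. print JSON in the shape below (a sketch; `getEKSToken` is the presigned-STS helper sketched earlier, and the stack-config lookup is elided):
```go
import (
	"encoding/json"
	"os"
	"time"
)

// Sketch of the output a hypothetical `pulumi eks get-token <stackname>`
// command would print: the ExecCredential format that kubectl's exec
// plugin machinery understands, with an expiry so the client knows when
// to re-invoke the command and refresh the token.
func printExecCredential(token string, expiry time.Time) error {
	cred := map[string]interface{}{
		"apiVersion": "client.authentication.k8s.io/v1alpha1",
		"kind":       "ExecCredential",
		"status": map[string]interface{}{
			"token":               token,
			"expirationTimestamp": expiry.UTC().Format(time.RFC3339),
		},
	}
	return json.NewEncoder(os.Stdout).Encode(cred)
}
```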
b
I don't think it's likely we'll implement something so specific to AWS EKS to work around an upstream product deficiency, I'm afraid 😞 I'd recommend opening an issue with the EKS team to try and get them to implement more options for auth in the meantime
b
EKS has also added support for custom OIDC identity providers (e.g. if you wanted to use GSuite or Okta or something else). Let’s say it was configured with that: would that help? It seems that the Pulumi EKS provider is quite specific in how it expects to communicate with the upstream cluster, so would we suffer from the same challenges if it just tried to use the out-of-the-box kubeconfig as it does today?
b
Pulumi EKS uses the same mechanisms to auth to a cluster that anything else would; it's not doing anything specific. You can always build your own kubeconfig if needed
b
Let me think about it some more and see if I have some better questions, but appreciate the responses so far!