# kubernetes
f
I'm trying to build out an EKS project in Go, and I'm having some issues with credentials and providers. Wondering if anyone has tried this particular scenario yet. We have a HashiCorp Vault AWS credential backend already configured for this account. So: I have Pulumi grabbing AWS credentials from Vault, building an AWS provider, and then using that for all of my AWS resources. This works well for most things (VPC created already, etc.). But when I use pulumi-eks to build an EKS cluster with this provider, it fails to validate that the cluster is running. From the error messages, it appears that the Pulumi operator (which is running this code) is trying to use its own credentials to access the cluster afterwards. So I think the Kubernetes provider that gets generated is not using the AWS provider credentials that I used to create the cluster.

At this point, I have no clue how to get around this, other than to not use the pulumi-eks library and instead build out the cluster, node groups, etc. myself using the "standard" aws library. That would let me control the K8s provider that gets generated and the aws-auth ConfigMap myself (I think). Am I missing something, or is that the correct path forward here? Also, let me know if I should post this in the AWS or Go channel instead.
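For context, the Vault-to-AWS-provider wiring described above might look roughly like the sketch below, using the pulumi-vault `aws.GetAccessCredentials` data source. The backend mount name, Vault role, SDK major versions, and helper name are all assumptions, not taken from the thread.

```go
package main

import (
	awsp "github.com/pulumi/pulumi-aws/sdk/v5/go/aws"
	vaultaws "github.com/pulumi/pulumi-vault/sdk/v5/go/vault/aws"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

// newAWSProviderFromVault asks the Vault AWS secrets engine for short-lived
// STS credentials and builds an explicit AWS provider from them. Every AWS
// resource can then be created with pulumi.Provider(p).
func newAWSProviderFromVault(ctx *pulumi.Context, name, region string) (*awsp.Provider, error) {
	// "aws" (mount path) and "deploy" (Vault role) are placeholders.
	creds, err := vaultaws.GetAccessCredentials(ctx, &vaultaws.GetAccessCredentialsArgs{
		Backend: "aws",
		Role:    "deploy",
		Type:    pulumi.StringRef("sts"),
	})
	if err != nil {
		return nil, err
	}

	return awsp.NewProvider(ctx, name, &awsp.ProviderArgs{
		Region:    pulumi.String(region),
		AccessKey: pulumi.String(creds.AccessKey),
		SecretKey: pulumi.String(creds.SecretKey),
		Token:     pulumi.String(creds.SecurityToken),
	})
}
```

The open question in the rest of the thread is how to get the Kubernetes side of pulumi-eks to use these same credentials.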
o
So, to make sure I understand, you're:
• Using the Kubernetes Operator to run Pulumi stacks in an existing cluster, using a GitOps workflow
• Using the Vault secrets configuration with the operator and finding it works for other deployments; that is, secrets configured in `Pulumi.$stackName.yaml` are working great
• Then deploying a cluster with the `eks` provider
• And deploying resources to that second cluster from the first via the operator
Or maybe I have that 3rd/4th step wrong; is the `eks` provider itself reporting an error?
f
Yep, 3 should just be "`eks` provider gives error"; not trying to deploy yet.
Looking at the option `ProviderCredentialOpts`, which seems to control how the generated kubeconfig does authentication. Unfortunately, it doesn't let me pass different AWS credentials, just a profile name or a role ARN. I tried the role ARN, but that seems to be where the problem is: it's trying to assume that role ARN with the credentials that the pod has (or whatever is running Pulumi), rather than with the credentials passed in via the AWS provider I generated. https://pkg.go.dev/github.com/pulumi/pulumi-eks/sdk@v0.40.0/go/eks#KubeconfigOptionsArgs

This further leads me to believe that I need to take the more "manual" approach: not the `eks` provider, but the `aws/eks` provider instead.
Hmmm, actually, thinking through that, I have no clue how to do that. Can I generate a Kubernetes provider that uses AWS credentials that I retrieve from Vault? Or do I need to create a local AWS config on the fly that has profiles? I have gotten myself into quite the auth pickle 😄
I'm going to rethink how I do credentials in general here. I'll report back if I come up with something.
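On the "local AWS config on the fly" question above: one possible (untested) workaround is to write the Vault-issued credentials into a named profile on whatever machine runs Pulumi, and then point `ProviderCredentialOpts` at that profile via `ProfileName`. A rough sketch, where the profile name and file handling are assumptions:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// writeVaultProfile appends the STS credentials obtained from Vault to
// ~/.aws/credentials under the given profile name, so a kubeconfig generated
// with ProviderCredentialOpts{ProfileName: ...} authenticates with them
// instead of whatever identity the operator pod happens to have.
func writeVaultProfile(profile, accessKey, secretKey, sessionToken string) error {
	home, err := os.UserHomeDir()
	if err != nil {
		return err
	}
	dir := filepath.Join(home, ".aws")
	if err := os.MkdirAll(dir, 0o700); err != nil {
		return err
	}

	entry := fmt.Sprintf(
		"[%s]\naws_access_key_id = %s\naws_secret_access_key = %s\naws_session_token = %s\n",
		profile, accessKey, secretKey, sessionToken)

	// Append so any profiles already present in the file are preserved.
	f, err := os.OpenFile(filepath.Join(dir, "credentials"),
		os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o600)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = f.WriteString(entry)
	return err
}
```

The matching cluster option would then be something like `ProviderCredentialOpts: eks.KubeconfigOptionsArgs{ProfileName: pulumi.String("vault")}`; whether writing credentials to the pod's filesystem is acceptable is a separate question.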
o
What option(s) are you passing to the EKS cluster?
f
```go
cluster, err := eks.NewCluster(ctx, awsCluster.Name, &eks.ClusterArgs{
		Version:          pulumi.String("1.21"),
		Name:             pulumi.String(awsCluster.Name),
		PublicSubnetIds:  pulumi.ToStringArrayOutput(publicSubnetIds),
		PrivateSubnetIds: pulumi.ToStringArrayOutput(privateSubnetIds),
		VpcId:            awsCluster.vpc.ID(),
		InstanceRoles: iam.RoleArray{
			awsCluster.nodeIamRole,
		},
		ClusterSecurityGroupTags: mergeDefaultTags(pulumi.StringMap{
			"Name":    pulumi.String(awsCluster.Name),
			"Cluster": pulumi.String(awsCluster.Name),
		}),
		Tags: mergeDefaultTags(pulumi.StringMap{
			"Name":    pulumi.String(awsCluster.Name),
			"Cluster": pulumi.String(awsCluster.Name),
		}),
		SkipDefaultNodeGroup: pulumi.BoolPtr(true),
		RoleMappings: eks.RoleMappingArray{
			eks.RoleMappingArgs{
				// TODO: Probably need to create a special role for this
				RoleArn:  pulumi.String("arn:aws:iam::REDACTED:role/FairwindsAdministrator"),
				Groups:   pulumi.StringArray{pulumi.String("kubernetes-admins")},
				Username: pulumi.String("arn:aws:iam::REDACTED:role/FairwindsAdministrator"),
			},
		},
		ProviderCredentialOpts: eks.KubeconfigOptionsArgs{
			RoleArn: pulumi.String("arn:aws:iam::REDACTED:role/FairwindsAdministrator"),
		},
	}, pulumi.Provider(awsCluster.provider))
	if err != nil {
		return err
	}
```
o
Where awsCluster is a...?
f
_suddenly very worried I'm doing everything wrong 😛_
```go
type Cluster struct {
	Name            string          `yaml:"name"`
	Region          string          `yaml:"region"`
	VpcCidr         string          `yaml:"vpcCidr"`
	Subnets         []Subnet        `yaml:"subnets"`
	StaticNodeGroup StaticNodeGroup `yaml:"staticNodeGroup"`
	AddOns          AddOns          `yaml:"addOns"`

	// pulumi created objects to find the outputs later
	provider        *aws.Provider
	vpc             *ec2.Vpc
	igw             *ec2.InternetGateway
	azList          []string
	cluster         *eks.Cluster
	staticNodeGroup *eks.ManagedNodeGroup
	nodeIamRole     *iam.Role
}
```
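Tying this back to the earlier sketch, the struct's `provider` field could hypothetically be populated with the `newAWSProviderFromVault` helper sketched above; this is illustrative, not the thread author's code:

```go
// initProvider builds the AWS provider for this cluster from Vault-issued
// credentials and stores it on the struct for later resource creation.
// Hypothetical helper, reusing newAWSProviderFromVault from the earlier sketch.
func (c *Cluster) initProvider(ctx *pulumi.Context) error {
	p, err := newAWSProviderFromVault(ctx, c.Name, c.Region)
	if err != nil {
		return err
	}
	c.provider = p
	return nil
}
```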
o
Btw, to use Hashi Vault secrets directly (not via code but by Pulumi engine/config) which might solve some issues: https://gist.github.com/viveklak/05344c9c684dce4dea41bb09915903e0#file-operatorwithvaultsecretmanager-md
Oh yeah, I'm not sure what that is
f
That Cluster struct is custom to my code. I saw that, and that's great for encrypting the secrets, but these are AWS STS credentials for accessing the AWS API.
o
Ohh okay
f
Yeah, sorry, not super clear 😄
o
So you've built a provider via `aws.NewProvider` on that struct?
f
Yeah.
So I think I need the generated kubeconfig to set the AWS auth env variables with the actual credentials from that provider I created with `aws.NewProvider`. That normally wouldn't be safe, but I'm only ever going to use that particular kubeconfig in my code, never export it.
If `eks.KubeconfigOptionsArgs` supported arbitrary `env` entries, I could probably do that.
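Since the `eks` package doesn't support arbitrary `env` entries, the closest equivalent today is the "manual" route mentioned earlier: build the cluster from the plain `aws/eks` resources, render a kubeconfig whose `exec` block carries the Vault credentials as env vars, and construct your own Kubernetes provider from it. A sketch under those assumptions, not the thread's actual code:

```go
package main

import (
	awseks "github.com/pulumi/pulumi-aws/sdk/v5/go/aws/eks"
	"github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

// newK8sProvider renders a kubeconfig that authenticates via `aws eks
// get-token`, with the Vault-issued STS credentials injected through the exec
// env block, so the operator pod's own credentials are never consulted.
func newK8sProvider(ctx *pulumi.Context, cluster *awseks.Cluster,
	accessKey, secretKey, sessionToken string) (*kubernetes.Provider, error) {

	// The exec apiVersion may need adjusting depending on client versions.
	kubeconfig := pulumi.Sprintf(`apiVersion: v1
kind: Config
clusters:
- name: eks
  cluster:
    server: %s
    certificate-authority-data: %s
contexts:
- name: eks
  context:
    cluster: eks
    user: aws
current-context: eks
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args: ["eks", "get-token", "--cluster-name", "%s"]
      env:
      - name: AWS_ACCESS_KEY_ID
        value: %s
      - name: AWS_SECRET_ACCESS_KEY
        value: %s
      - name: AWS_SESSION_TOKEN
        value: %s
`, cluster.Endpoint, cluster.CertificateAuthority.Data().Elem(), cluster.Name,
		accessKey, secretKey, sessionToken)

	// Mark the rendered kubeconfig secret so the embedded credentials are
	// encrypted in state and redacted in logs.
	secret := pulumi.ToSecret(kubeconfig).(pulumi.StringOutput)

	return kubernetes.NewProvider(ctx, "eks-k8s", &kubernetes.ProviderArgs{
		Kubeconfig: secret,
	})
}
```

With this approach the aws-auth ConfigMap (role mappings for the node IAM role) also has to be managed explicitly, which is part of what pulumi-eks normally does for you.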
o
If you look at the logs on the operator or Pulumi service console, do you see which resource or step is failing?
f
Essentially every step that tries to run a kubectl command:
• `kubernetes:core/v1:ConfigMap`
• `eks:index:VpcCni`
o
Interesting. I wonder what the issue is - EKS should be generating its own internal kubeconfig & provider to access the cluster
I think we're at the limit of my knowledge of the provider, but it'd be worth searching for that or creating an issue on the Pulumi EKS GitHub repo. @sparse-park-68967 any other advice?
f
Appreciate the help. I'm going to keep tinkering to see if I can come up with something. If I can't, I'll file an issue. This just may be an unnecessarily complicated credential config too 😄
I drew a picture of what I'm trying to do, not sure if it's helpful, but I needed it for my own thinking as well.
o
That makes sense, tbh
Are you wrapping the values you get from Vault in `pulumi.Secret()`, by the way?
s
Hi just catching up here
are you able to take the operator out of the equation to simplify for now? e.g. try to do it locally instead and see if the problem persists?
Ah I think you just opened https://github.com/pulumi/pulumi-eks/issues/712 - thanks for doing that. It does seem like there is a context gap in terms of truly discovering the authentication mechanism and the limited options supported in passing the authentication context to the kubernetes provider created by EKS. I will get that prioritized
šŸ‘ 1
f
Thank you! Friel, I am not wrapping those currently, just sticking them into an AWS provider. Should I be doing that first?
o
Yeah, we recommend that any external value you want to ensure is encrypted in our state files & redacted in logs be passed through our Secret wrapper.
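In the Go SDK that wrapper is `pulumi.ToSecret`. A minimal fragment (reusing the `creds` value, `region`, and imports from the earlier hypothetical helper) of applying it before the values reach the AWS provider:

```go
// Mark the sensitive pieces as secrets so they are encrypted in state and
// redacted in logs before they are handed to the provider.
secretKey := pulumi.ToSecret(pulumi.String(creds.SecretKey)).(pulumi.StringOutput)
token := pulumi.ToSecret(pulumi.String(creds.SecurityToken)).(pulumi.StringOutput)

provider, err := awsp.NewProvider(ctx, "vault-creds", &awsp.ProviderArgs{
	Region:    pulumi.String(region),
	AccessKey: pulumi.String(creds.AccessKey),
	SecretKey: secretKey,
	Token:     token,
})
```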
f
Cool. Thanks for the heads up.
I'm using `vault/aws` from pulumi-vault to retrieve the secrets. Looking at state, it seems like that's handling this for me. Am I correct on that?