
kind-house-12874

05/10/2023, 9:31 AM
Hi! Continuing evaluation of Pulumi but having a slight issue when deploying an EKS cluster. Consider the following AWS organization structure: Root -> Sub-organization. We are deploying EKS in the sub-organization, and other resources have been deployed there successfully. When deploying the EKS cluster, everything seems to be fine: we can see the cluster, the CloudFormation stack, EC2 instances, security groups, etc. in the AWS console. However, the cluster reports no node groups and no nodes running, although there are a bunch of EC2 instances launched by the CloudFormation stack. Is there something missing that associates the EC2 instances with the created EKS cluster? I was thinking that `@pulumi/eks` would handle that for us. Here is a piece of TypeScript we use to perform the deployment; some internal details are redacted:
import * as aws from "@pulumi/aws";
import * as awsx from "@pulumi/awsx";
import * as pulumi from "@pulumi/pulumi";
import * as eks from "@pulumi/eks";

const accountRoleArn = "arn...";
const accountId = "...";

const subAccountProvider = new aws.Provider("accountProvider", {
  allowedAccountIds: [accountId],
  assumeRole: {
    roleArn: accountRoleArn,
    sessionName: "PulumiSession",
    externalId: pulumi.getProject(),
  },
});

const vpc = new awsx.ec2.Vpc(
  "vpc",
  {
    numberOfAvailabilityZones: 2,
    enableDnsHostnames: true
  },
  { provider: subAccountProvider }
);

const eksCluster = new eks.Cluster(
  "test-cluster",
  {
    name: "test-cluster",
    vpcId: vpc.vpcId,
    publicSubnetIds: vpc.publicSubnetIds,
    privateSubnetIds: vpc.privateSubnetIds,
    nodeAssociatePublicIpAddress: false,
    nodeSubnetIds: vpc.privateSubnetIds,
    publicAccessCidrs: ["...REDACTED..."],
    version: "1.22",
    instanceType: "t3a.medium",
    desiredCapacity: 2,
    minSize: 1,
    maxSize: 2,
    providerCredentialOpts: {
      roleArn: accountRoleArn // Assumed role in sub-organization
    },
  },
  { provider: subAccountProvider }
);
`pulumi up` successfully completes and I’m able to connect to the EKS cluster with `kubectl`, but it has no nodes running.
For others that might struggle with the same issue: `publicAccessCidrs` limits Kubernetes API server endpoint access to the specified CIDR blocks, so if those blocks do not allow access from within the VPC, the nodes can’t join the cluster. To allow access to the Kubernetes API from inside the VPC, you can set `endpointPrivateAccess` to `true`. More details in https://docs.aws.amazon.com/eks/latest/APIReference/API_VpcConfigRequest.html
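Concretely, here is roughly what that looks like in the cluster definition from my earlier snippet (a sketch only; `vpc`, `accountRoleArn` and `subAccountProvider` are the same as above, and only the two `endpoint*` settings are new):

const eksCluster = new eks.Cluster(
  "test-cluster",
  {
    name: "test-cluster",
    vpcId: vpc.vpcId,
    publicSubnetIds: vpc.publicSubnetIds,
    privateSubnetIds: vpc.privateSubnetIds,
    nodeAssociatePublicIpAddress: false,
    nodeSubnetIds: vpc.privateSubnetIds,
    publicAccessCidrs: ["...REDACTED..."],
    endpointPrivateAccess: true,  // allow nodes inside the VPC to reach the API server
    endpointPublicAccess: true,   // keep the CIDR-restricted public endpoint (default)
    version: "1.22",
    instanceType: "t3a.medium",
    desiredCapacity: 2,
    minSize: 1,
    maxSize: 2,
    providerCredentialOpts: {
      roleArn: accountRoleArn, // Assumed role in sub-organization
    },
  },
  { provider: subAccountProvider }
);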

gray-electrician-97832

05/22/2023, 5:10 PM
I found that using `node_associate_public_ip_address=False` prevented the nodes from either being created or joining the cluster, not sure which. That was while allowing the default values for `endpoint_private_access` (default false) and `endpoint_public_access` (default true).
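For reference, a rough TypeScript equivalent of that combination (a sketch only, reusing the `vpc` from the snippet earlier in this thread; in the TypeScript SDK these options are called `nodeAssociatePublicIpAddress`, `endpointPrivateAccess` and `endpointPublicAccess`):

import * as eks from "@pulumi/eks";

// Sketch: worker nodes in private subnets without public IPs.
// With the defaults (endpointPrivateAccess: false, endpointPublicAccess: true)
// the nodes may have no working path to the API server, matching the
// behaviour described above; enabling the private endpoint is the workaround.
const cluster = new eks.Cluster("example", {
  vpcId: vpc.vpcId,
  privateSubnetIds: vpc.privateSubnetIds,
  nodeSubnetIds: vpc.privateSubnetIds,
  nodeAssociatePublicIpAddress: false, // no public IPs on the worker nodes
  endpointPrivateAccess: true,         // non-default: reach the API from inside the VPC
  // endpointPublicAccess: true        // default
});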