kind-house-12874
05/10/2023, 9:31 AM
@pulumi/eks would handle that for us.
Here is a piece of TypeScript we use to perform the deployment. Some internal details are redacted:
import * as aws from "@pulumi/aws";
import * as awsx from "@pulumi/awsx";
import * as pulumi from "@pulumi/pulumi";
import * as eks from "@pulumi/eks";

const accountRoleArn = "arn...";
const accountId = "...";

// AWS provider that assumes a role in the sub-account.
const subAccountProvider = new aws.Provider("accountProvider", {
  allowedAccountIds: [accountId],
  assumeRole: {
    roleArn: accountRoleArn,
    sessionName: "PulumiSession",
    externalId: pulumi.getProject(),
  },
});

// VPC with public and private subnets across two availability zones, created in the sub-account.
const vpc = new awsx.ec2.Vpc(
  "vpc",
  {
    numberOfAvailabilityZones: 2,
    enableDnsHostnames: true,
  },
  { provider: subAccountProvider }
);

// EKS cluster whose nodes live in the private subnets without public IPs;
// access to the public API endpoint is restricted to the given CIDRs.
const eksCluster = new eks.Cluster(
  "test-cluster",
  {
    name: "test-cluster",
    vpcId: vpc.vpcId,
    publicSubnetIds: vpc.publicSubnetIds,
    privateSubnetIds: vpc.privateSubnetIds,
    nodeAssociatePublicIpAddress: false,
    nodeSubnetIds: vpc.privateSubnetIds,
    publicAccessCidrs: ["...REDACTED..."],
    version: "1.22",
    instanceType: "t3a.medium",
    desiredCapacity: 2,
    minSize: 1,
    maxSize: 2,
    providerCredentialOpts: {
      roleArn: accountRoleArn, // Assumed role in sub-organization
    },
  },
  { provider: subAccountProvider }
);
`pulumi up` successfully completes and I'm able to connect to the EKS cluster with `kubectl`, but it has no nodes running.

`publicAccessCidrs` limits Kubernetes API server endpoint access to the specified CIDR blocks. If those blocks do not allow access from within the VPC, the nodes can't join the cluster. To allow access to the Kubernetes API from inside the VPC, you can set `endpointPrivateAccess` to `true`.
More details in https://docs.aws.amazon.com/eks/latest/APIReference/API_VpcConfigRequest.html
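Applied to the cluster definition above, that change would look roughly like this. This is a minimal sketch that reuses the `vpc`, `accountRoleArn`, and `subAccountProvider` from the earlier snippet; only the `endpointPrivateAccess` line is new.

// Sketch of the suggested fix, reusing the resources defined in the snippet above
// (not a complete program).
const eksCluster = new eks.Cluster(
  "test-cluster",
  {
    name: "test-cluster",
    vpcId: vpc.vpcId,
    publicSubnetIds: vpc.publicSubnetIds,
    privateSubnetIds: vpc.privateSubnetIds,
    nodeAssociatePublicIpAddress: false,
    nodeSubnetIds: vpc.privateSubnetIds,
    publicAccessCidrs: ["...REDACTED..."], // public endpoint stays restricted to these CIDRs
    endpointPrivateAccess: true,           // expose the API server inside the VPC so the nodes can join
    version: "1.22",
    instanceType: "t3a.medium",
    desiredCapacity: 2,
    minSize: 1,
    maxSize: 2,
    providerCredentialOpts: {
      roleArn: accountRoleArn,
    },
  },
  { provider: subAccountProvider }
);

With the private endpoint enabled, nodes in the private subnets resolve the cluster endpoint to in-VPC addresses, while the public endpoint remains limited to the redacted CIDR list.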
gray-electrician-97832
05/22/2023, 5:10 PM
`node_associate_public_ip_address=False` prevented the nodes from either being created or joining the cluster, not sure which. That was while allowing the default values for `endpoint_private_access` (default false) and `endpoint_public_access` (default true).