# general
s
Hi Lee, do you have a few mins? I am using Crosswalk for EKS. As per https://www.pulumi.com/blog/crosswalk-for-aws-1-0/, if I set nodeAssociatePublicIpAddress to false for the cluster, the nodes should get private IPs, however it isn't working that way.
```typescript
export const cluster = new eks.Cluster(`${stack}-eks-cluster`, {
    name: `${stack}-eks-cluster`,
    skipDefaultNodeGroup: true,
    version: eksConfig.version,
    createOidcProvider: true,
    privateSubnetIds: vpc.privateSubnetIds,
    publicSubnetIds: vpc.publicSubnetIds,
    deployDashboard: false,
    storageClasses: eksConfig.volumeType,
    vpcId: vpc.id,
    useDefaultVpcCni: true,
    instanceRoles: [role],
    nodeAssociatePublicIpAddress: false,
    kubernetesServiceIpAddressRange: eksConfig.kubernetesServiceIpAddressRange,
    tags: {
        Environment: stack,
    }
});

eksConfig.nodegroup.forEach((nodeGroupDetail, index) => {
    eks.createManagedNodeGroup(`${stack}-managed-ng-${index}`, {
        cluster: cluster,
        nodeGroupName: `${stack}-managed-ng-${index}`,
        capacityType: nodeGroupDetail.capacityType,
        instanceTypes: [nodeGroupDetail.type],
        scalingConfig: {
            desiredSize: nodeGroupDetail.desiredCapacity,
            minSize: nodeGroupDetail.minSize,
            maxSize: nodeGroupDetail.maxSize,
        },
        labels: Object.fromEntries(nodeGroupDetail.labels.map(({ key, value }) => [key, value])),
        nodeRole: role,
    }, cluster);
});
```
I am expecting the nodes in the node group to use the private subnets, but it ain't working that way. Am I missing something here?
b
```typescript
nodeAssociatePublicIpAddress: false,
```
This is for the default node group, and you’re creating another managed node group.
what is actually happening?
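A minimal sketch of the distinction above, assuming the pulumi/eks TypeScript SDK (the resource name here is made up):
```typescript
import * as eks from "@pulumi/eks";

// The flag below only configures the *default* (self-managed) node group that
// eks.Cluster can create for you. With skipDefaultNodeGroup: true there is no
// such group, so the flag has nothing to act on.
const exampleCluster = new eks.Cluster("example", {
    skipDefaultNodeGroup: false,          // default node group is created...
    nodeAssociatePublicIpAddress: false,  // ...and this flag applies to it
});

// eks.createManagedNodeGroup(...) creates a separate aws.eks.NodeGroup resource,
// which has no nodeAssociatePublicIpAddress option at all.
```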
s
Ah okay, well the nodes in the managed node groups are getting IPs from the private and public subnets at random.
Is there a similar setting for the managed node group?
b
first suggestion would be to disable the default node group
skipDefaultNodeGroup: true
s
I already have that property set to true
b
ah yes
how did you build your private subnet?
s
```typescript
export const vpc = new awsx.ec2.Vpc(`${stack}-eks-vpc`, {
    numberOfAvailabilityZones: eksConfig.vpc.length,
    cidrBlock: eksConfig.cidrRange,
    numberOfNatGateways: eksConfig.vpc.length,
    subnets: eksConfig.vpc.map((instance, index) => {
        return <awsx.ec2.VpcSubnetArgs[]>[{
            type: "private",
            name: `${stack}-private-eks-subnet-${index}`,
            location: {
                availabilityZone: instance.zoneName,
                cidrBlock: instance.privateCidr,
            },
            tags: {
                Environment: stack,
                "kubernetes.io/role/internal-elb": "1"
            }
        },
        {
            type: "public",
            name: `${stack}-public-eks-subnet-${index}`,
            location: {
                availabilityZone: instance.zoneName,
                cidrBlock: instance.publicCidr,
            },
            tags: {
                Environment: stack,
                "kubernetes.io/role/elb": "1"
            }
        }]
    }).flatMap(s => s),
});
```
b
okay what does a node look like that isn’t behaving as expected?
s
Well, the nodes are getting assigned IPs from the public subnets. I was under the assumption that nodeAssociatePublicIpAddress, together with privateSubnetIds and publicSubnetIds when creating the cluster, forces all nodes onto private IPs.
b
it should, there must be a configuration issue somewhere
s
Okay
Any pointers will be really helpful 😄
b
• check the autoscaling group your nodes are deployed into
• verify the subnets the ASG is targeting
• check the subnet definition to ensure "auto assign public IP addresses" isn't set (a small sketch for this check follows below)
• make sure all nodes in the group are in the same ASG
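To make the third check concrete, a small sketch using @pulumi/aws; the subnet IDs are hypothetical placeholders you'd copy from the ASG's networking settings:
```typescript
import * as aws from "@pulumi/aws";

// Hypothetical subnet IDs copied from the node group's ASG -- replace with yours.
const asgSubnetIds = ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"];

// Report whether each subnet auto-assigns public IPs on launch. Any `true` here
// means instances landing in that subnet get a public address, regardless of
// other node-group settings.
export const autoAssignPublicIp = Promise.all(
    asgSubnetIds.map(async id => {
        const subnet = await aws.ec2.getSubnet({ id });
        return { subnetId: id, mapPublicIpOnLaunch: subnet.mapPublicIpOnLaunch };
    }),
);
```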
s
• I haven't created an autoscaling group, just the managed node group with a scaling config; only the desired size of the scaling config is created during pulumi up (and these nodes are the ones with random private/public IPs)
The code I sent here is pretty much all the bits and pieces used to create the VPC, EKS, and its managed node groups. I can't seem to find anything there that suggests it's a misconfig.
b
short of a screen share or running the code myself, it’s difficult to say what the issue is. Unfortunately I don’t have cycles to dive deeper right now
s
All good. If at some point you have the time for a quick screen share, please let me know.
@billowy-army-68599 one other thing: when I make a change to the nodeAssociatePublicIpAddress property and do a pulumi up, the change isn't even getting picked up.
b
that’s because you disabled the default node group, so there’s nothing to make changes to 🙂
s
so this property won't impact the managed node group, right? Is there an equivalent property on the managed node group?
In the ManagedNodeGroup options I found this:
```typescript
/**
     * Make subnetIds optional, since the cluster is required and it contains it.
     *
     * Default subnetIds is chosen from the following list, in order, if
     * subnetIds arg is not set:
     *   - core.subnetIds
     *   - core.privateSubnetIds
     *   - core.publicSubnetIds
     *
     * This default logic is based on the existing subnet IDs logic of this
     * package: https://git.io/JeM11
     */
    subnetIds?: pulumi.Input<pulumi.Input<string>[]>;
```
I haven't set core.subnetIds explicitly; I guess since core.subnetIds has all the values, it uses all the subnets for the managed node group.
b
oh man, totally missed that in your code. Yes, set the private subnet IDs on the node group.
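A sketch of that suggestion, reusing cluster, role, stack, vpc, and eksConfig from the snippets above; the only change is the explicit subnetIds, so the node group no longer falls back to core.subnetIds (which holds both public and private subnets):
```typescript
eksConfig.nodegroup.forEach((nodeGroupDetail, index) => {
    eks.createManagedNodeGroup(`${stack}-managed-ng-${index}`, {
        cluster: cluster,
        nodeGroupName: `${stack}-managed-ng-${index}`,
        // Pin the node group to the private subnets instead of letting it
        // default to core.subnetIds (public + private).
        subnetIds: vpc.privateSubnetIds,
        capacityType: nodeGroupDetail.capacityType,
        instanceTypes: [nodeGroupDetail.type],
        scalingConfig: {
            desiredSize: nodeGroupDetail.desiredCapacity,
            minSize: nodeGroupDetail.minSize,
            maxSize: nodeGroupDetail.maxSize,
        },
        labels: Object.fromEntries(nodeGroupDetail.labels.map(({ key, value }) => [key, value])),
        nodeRole: role,
    }, cluster);
});
```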
s
Yeah, I'll do that, but QQ: if core.subnetIds has all the subnets anyway, won't the default subnetIds always end up including all the subnets in the VPC regardless?