
    boundless-telephone-75738

    7 months ago
    Hi folks, I'm deploying an EKS cluster and trying to use a custom node group with only private IPs. I've set the API endpoint to be available in both private and public subnets, but the nodes still don't register with the cluster. Any hints as to what I'm doing wrong, or which log I should look at to see why registration fails? It worked with the default node group as long as I kept nodeAssociatePublicIpAddress set to true, but when I set it to false (a requirement from our security team) the nodes fail to register. I've been banging my head against this for a while now, and I'm sure I'm missing something stupid
    export const cluster = new eks.Cluster(clusterName, {
        storageClasses: {
            'gp2-encrypted': { type: 'gp2', encrypted: true },
        },
        instanceRoles: [stdNodegroupIamRole, spotNodegroupIamRole],
        name: clusterName,
        vpcId: vpcId,
        privateSubnetIds: privateSubnetIds,
        publicSubnetIds: publicSubnetIds,
        userMappings: createUserMapping(),
        useDefaultVpcCni: true,
        createOidcProvider: true,
        nodeAssociatePublicIpAddress: false,
        encryptionConfigKeyArn: keyAlias.then((k) => k.targetKeyArn),
        vpcCniOptions: {
            enablePrefixDelegation: true,
        },
        clusterTags: {
            Pulumi: 'true',
        },
        skipDefaultNodeGroup: true,
        clusterSecurityGroupTags: { ClusterSecurityGroupTag: 'true' },
        nodeSecurityGroupTags: { NodeSecurityGroupTag: 'true' },
        endpointPublicAccess: true,
        endpointPrivateAccess: true,
    });
    
    cluster.createNodeGroup('standard-ng', {
        nodeAssociatePublicIpAddress: false,
        minSize: 1,
        maxSize: 6,
        desiredCapacity: 2,
        instanceType: standardInstance,
        bootstrapExtraArgs:
            "--use-max-pods false --kubelet-extra-args '--max-pods=110'",
        instanceProfile: new aws.iam.InstanceProfile('ng-standard', {
            role: stdNodegroupIamRole.name,
        }),
        nodeSubnetIds: privateSubnetIds,
        labels: {
            ondemand: 'true',
        },
    });
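    One thing worth checking (not something confirmed in this thread, just a common cause of exactly this symptom): nodes with no public IP still need an outbound path to ECR, EC2, and STS to pull images and register, either through a NAT gateway or through VPC endpoints. A rough Pulumi sketch of the endpoint approach, reusing the vpcId and privateSubnetIds from the snippet above (the resource names and the route-table wiring are assumptions):

    ```typescript
    import * as aws from '@pulumi/aws';

    // Assumed to exist in the surrounding program: vpcId and privateSubnetIds,
    // plus a security group on the endpoints that allows HTTPS from the nodes.
    const region = aws.config.region;

    // Interface endpoints the node bootstrap path typically needs when there
    // is no public IP and no NAT gateway: ECR (image pulls), EC2, and STS.
    for (const svc of ['ecr.api', 'ecr.dkr', 'ec2', 'sts']) {
        new aws.ec2.VpcEndpoint(`eks-${svc.replace('.', '-')}`, {
            vpcId: vpcId,
            serviceName: `com.amazonaws.${region}.${svc}`,
            vpcEndpointType: 'Interface',
            subnetIds: privateSubnetIds,
            privateDnsEnabled: true,
        });
    }

    // ECR image layers are served from S3, so a gateway endpoint is needed too;
    // the private subnets' route table IDs would go in routeTableIds (assumption).
    new aws.ec2.VpcEndpoint('eks-s3', {
        vpcId: vpcId,
        serviceName: `com.amazonaws.${region}.s3`,
        vpcEndpointType: 'Gateway',
    });
    ```

    If the VPC already has a NAT gateway on the private subnets' route tables, none of this is needed and the problem is likely elsewhere (kubelet logs on the node via SSM are usually the next place to look).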

    worried-xylophone-86184

    5 months ago
    Hi Christopher! Did you manage to fix this by any chance?
    I am facing a similar issue 😅
    I am making use of a managed node group:
    eks_cluster = eks.Cluster(
        cluster_name,
        name=cluster_name,
        public_subnet_ids=list(public_subnets.values()),
        private_subnet_ids=list(private_subnets.values()),
        tags={"Name": cluster_name, "Stack": stack_name},
        vpc_id=vpc_id,
        version="1.21",
        instance_role=eks_ec2_role,
        skip_default_node_group=True,
    )
    
    
    node_group = eks.ManagedNodeGroup(
        node_group_name,
        cluster=eks_cluster.core,
        capacity_type="SPOT",
        instance_types=["t3a.medium"],
        node_group_name=node_group_name,
        node_role=eks_ec2_role,
        tags={"Name": cluster_name, "Stack": stack_name},
        subnet_ids=list(private_subnets.values()),
        scaling_config=pulumi_aws.eks.NodeGroupScalingConfigArgs(
            desired_size=1,
            min_size=1,
            max_size=3,
        ),
    )

    boundless-telephone-75738

    5 months ago
    Hi Sushant, sorry, I've had my Slack notifications on mute for a long Easter holiday. I ended up having to allocate a public IP to my nodes for now; we've added an exception to our security rules for the one port that's exposed by Traefik for terminating HTTPS traffic. So no good solution found, I'm afraid
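    For reference, the workaround described above (a narrow security exception for the single port Traefik exposes) might look roughly like this in Pulumi; port 443 and the use of the cluster's node security group are assumptions, not details from the thread:

    ```typescript
    import * as aws from '@pulumi/aws';

    // Hypothetical: `cluster` is the eks.Cluster from earlier in the thread;
    // 443 is assumed to be the port Traefik terminates HTTPS on.
    new aws.ec2.SecurityGroupRule('traefik-https-ingress', {
        type: 'ingress',
        securityGroupId: cluster.nodeSecurityGroup.id,
        protocol: 'tcp',
        fromPort: 443,
        toPort: 443,
        cidrBlocks: ['0.0.0.0/0'],
        description: 'Security-team-approved exception for Traefik HTTPS',
    });
    ```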