
    fresh-notebook-40503

    2 months ago
    Having an issue with eks.Cluster. The EC2 instances are being created, but they are not being associated with the EKS cluster. Specifically, the default node group is not being created. Also, the OIDC provider is created, but it is not associated with the EKS cluster. Everything was working earlier, but then I created a new AWS account and started using AWS profiles to stand up infrastructure in the new account. All other components are being created properly in the new AWS account.
    import * as aws from "@pulumi/aws";
    import * as eks from "@pulumi/eks";

    // vpc and clusterAdminRole are created earlier in the program
    const eksCluster = new eks.Cluster("eks-cluster", {
        vpcId: vpc.id,
        publicSubnetIds: vpc.publicSubnetIds,
        privateSubnetIds: vpc.privateSubnetIds,
        // worker nodes run in the private subnets without public IPs
        nodeAssociatePublicIpAddress: false,
        instanceType: "m5.large",
        desiredCapacity: 2,
        minSize: 2,
        maxSize: 4,
        createOidcProvider: true,
        roleMappings: [
          {
            groups: ["system:masters"],
            roleArn: clusterAdminRole.arn,
            username: "pulumi:admin-user",
          },
        ],
        // use the same AWS profile that the stack's AWS provider is configured with
        providerCredentialOpts: {
          profileName: aws.config.profile,
        },
    });
    Has anyone run into this problem before?
    basically, pulumi up hangs for a while on "waiting for pods to be ready", and then I get errors like "Minimum number of live Pods was not attained". Makes sense - there are no nodes associated with the cluster. The default node group is not being created, even though the EC2 instances were created.

    billowy-army-68599

    2 months ago
    do the nodes get created in the same AWS account?
    i've never seen anything like this before

    flat-laptop-90489

    2 months ago
    I've seen many different reasons why node groups don't end up getting attached to clusters. Outside of missing IAM roles, networking is a big one. Usually, though, I have to SSH to the node and go digging through bootstrap script logs, kubelet logs, etc.

    fresh-notebook-40503

    2 months ago
    the nodes are successfully being created in the same AWS account, but they are not being associated with the EKS cluster. good call! I'll SSH into the created nodes and look through the logs there. thanks!
    ah i think i've got it. the nodes can't reach the public API endpoint of the k8s cluster. the nodes are in a private subnet, and the route table for the private subnet has a 0.0.0.0/0 entry pointing to a NAT Gateway. Unfortunately, that NAT Gateway is itself in a private subnet. It needs to be in a public subnet whose route table has a 0.0.0.0/0 entry pointing to the VPC's internet gateway.
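    for reference, the corrected routing ends up looking roughly like the sketch below. this is just a minimal illustration with @pulumi/aws and made-up names/CIDRs, not my actual code - a real EKS VPC needs public/private subnet pairs in at least two AZs (or you can let awsx.ec2.Vpc wire all of this up for you). the point is just the relationship: NAT gateway in a public subnet, public route table -> internet gateway, private route table -> NAT gateway
    import * as aws from "@pulumi/aws";

    // minimal, hypothetical VPC with one public and one private subnet
    const vpc = new aws.ec2.Vpc("eks-vpc", { cidrBlock: "10.0.0.0/16" });
    const publicSubnet = new aws.ec2.Subnet("public", {
        vpcId: vpc.id,
        cidrBlock: "10.0.0.0/24",
        mapPublicIpOnLaunch: true,
    });
    const privateSubnet = new aws.ec2.Subnet("private", {
        vpcId: vpc.id,
        cidrBlock: "10.0.1.0/24",
    });

    // internet gateway attached to the VPC
    const igw = new aws.ec2.InternetGateway("igw", { vpcId: vpc.id });

    // the NAT gateway must sit in a PUBLIC subnet and needs an Elastic IP
    const natEip = new aws.ec2.Eip("nat-eip", { domain: "vpc" }); // vpc: true on older @pulumi/aws versions
    const natGw = new aws.ec2.NatGateway("nat-gw", {
        subnetId: publicSubnet.id,
        allocationId: natEip.id,
    });

    // public route table: 0.0.0.0/0 -> internet gateway
    const publicRt = new aws.ec2.RouteTable("public-rt", {
        vpcId: vpc.id,
        routes: [{ cidrBlock: "0.0.0.0/0", gatewayId: igw.id }],
    });
    new aws.ec2.RouteTableAssociation("public-rta", {
        subnetId: publicSubnet.id,
        routeTableId: publicRt.id,
    });

    // private route table (where the worker nodes run): 0.0.0.0/0 -> NAT gateway
    const privateRt = new aws.ec2.RouteTable("private-rt", {
        vpcId: vpc.id,
        routes: [{ cidrBlock: "0.0.0.0/0", natGatewayId: natGw.id }],
    });
    new aws.ec2.RouteTableAssociation("private-rta", {
        subnetId: privateSubnet.id,
        routeTableId: privateRt.id,
    });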
    great, that was the problem. all fixed. thanks for the help!