# kubernetes
d
hi, having a problem here with nodes not being able to communicate with the dns service on another node. wondering if it's the security group policy that's causing the problem. I used eks.Cluster to create my cluster and eks.NodeGroup to create a node group for the cluster. it looks like a security group is created automatically when i do that. do i need to do anything else? to elaborate a bit more, my problem is that pods running on nodes without the dns service cannot reach the dns service at all.
b
@dry-teacher-74595 what CNI are you using? the default?
can you share your code?
d
yea i was using the default. it looks like it was a problem with the security groups: eks.Cluster created a security group, and eks.NodeGroup created another; once i let them talk to each other it seems to have worked. what's the best way of setting this up?
const cluster = new eks.Cluster("cluster", {
    name: "formations",
    subnetIds: vpcs.dev.publicSubnetIds,
    vpcId: vpcs.dev.id,
    desiredCapacity: 2,
    maxSize: 4,
    minSize: 1,
    storageClasses: "gp2",
    providerCredentialOpts: {
        roleArn: formations_config.require("eksRole"),
    },
    version: "1.21",
});

const nodeGroup = new eks.NodeGroup("eks-nodegroup", {
    cluster: cluster,
    minSize: 2,
    maxSize: 6,
    version: "1.21",
});
this is what i have now
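One way to make the "let them talk to each other" step explicit in code is to add security group rules between the two groups. This is only a sketch: the output property names `clusterSecurityGroup` and `nodeSecurityGroup` are my assumptions about what pulumi/eks exposes for this cluster and node group, not something confirmed in the thread.

```typescript
import * as aws from "@pulumi/aws";

// Sketch (assumed property names): allow traffic in both directions between
// the security group eks.Cluster created and the one eks.NodeGroup created,
// so pods on node-group nodes can reach kube-dns pods on other nodes.
const nodesToCluster = new aws.ec2.SecurityGroupRule("nodes-to-cluster", {
    type: "ingress",
    protocol: "-1",   // all protocols; with "-1", AWS requires ports 0..0
    fromPort: 0,
    toPort: 0,
    securityGroupId: cluster.clusterSecurityGroup.id,
    sourceSecurityGroupId: nodeGroup.nodeSecurityGroup.id,
});

const clusterToNodes = new aws.ec2.SecurityGroupRule("cluster-to-nodes", {
    type: "ingress",
    protocol: "-1",
    fromPort: 0,
    toPort: 0,
    securityGroupId: nodeGroup.nodeSecurityGroup.id,
    sourceSecurityGroupId: cluster.clusterSecurityGroup.id,
});
```

A narrower variant would open only UDP and TCP port 53 for DNS, but allowing all traffic between the two groups matches what apparently fixed the problem here.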
b
how did you create your VPC? does it have working routing in the public subnet (i.e., can you ping one node from another?)
d
the vpc is created with all default parameters
dev: new awsx.ec2.Vpc("dev", {}),
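For reference, the empty-args call above relies entirely on the awsx defaults. A sketch of roughly what those defaults amount to, written out explicitly (the specific values below are my understanding of the awsx defaults, not taken from the thread):

```typescript
import * as awsx from "@pulumi/awsx";

// Sketch (assumed default values): awsx.ec2.Vpc with no args creates a VPC
// with public and private subnets spread across availability zones, plus an
// internet gateway and NAT gateways, so basic node-to-node routing inside a
// subnet should work without extra configuration.
const dev = new awsx.ec2.Vpc("dev", {
    cidrBlock: "10.0.0.0/16",       // assumed default CIDR
    numberOfAvailabilityZones: 2,   // assumed default AZ count
});
```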
i think one node can ping another… but i also haven't tried. our previous application was a single elastic beanstalk instance so i don't think anyone tested that
b
There's a lot that could be wrong here; Kubernetes is very complex...