e
Ok, another question as I learn Pulumi... I don't have DNS for outside the cluster working correctly. If I hop on a pod, I can traceroute to a known IP but can't look up any external names.
/ # traceroute 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 46 byte packets
 1  10.0.110.156 (10.0.110.156)  0.008 ms  0.008 ms  0.003 ms
 2  240.2.140.15 (240.2.140.15)  549.432 ms  240.2.140.12 (240.2.140.12)  5.604 ms  5.598 ms
 3  242.6.125.3 (242.6.125.3)  6.360 ms  242.6.125.133 (242.6.125.133)  6.672 ms


/ # nslookup google.com
;; connection timed out; no servers could be reached
When I check my kube-dns settings, I see this
❯ kubectl get svc -n kube-system

NAME       TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   172.20.0.10   <none>        53/UDP,53/TCP,9153/TCP   12h
Here is what is weird: all my networking is on 10.x.x.x/16 and I have verified the attached subnets are 10.x.x.x, so I am not sure how this is being set. Also, I am not importing any CoreDNS stuff into the cluster (only what is created)... Source code can be found here: https://github.com/number3ai/pegasus/blob/main/aws/eks/eks.ts and I would love any help
m
This is unrelated to Pulumi, I think. The service's IPs are not IPs in your subnet, but virtual IPs (see https://kubernetes.io/docs/concepts/services-networking/cluster-ip-allocation/ and the notes under "Optional settings" of step 2 in the EKS documentation at https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html)
So the fact that the service sits at 172.20.0.10 is correct: EKS chooses a range from 172.16.0.0/12 since your VPC uses a CIDR from 10.0.0.0/8
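If you want to convince yourself, here's a quick standalone check (plain TypeScript, nothing Pulumi-specific) that 172.20.0.10 really falls inside 172.16.0.0/12 but not inside a 10.0.0.0/8 VPC range:

```typescript
// Convert a dotted-quad IPv4 address to an unsigned 32-bit integer.
function ipToInt(ip: string): number {
  return ip.split(".").reduce((acc, octet) => (acc << 8) + parseInt(octet, 10), 0) >>> 0;
}

// Test whether an IP falls inside a CIDR block by masking both the
// address and the block's base address with the prefix mask.
function inCidr(ip: string, cidr: string): boolean {
  const [base, bitsStr] = cidr.split("/");
  const bits = parseInt(bitsStr, 10);
  const mask = bits === 0 ? 0 : (~0 << (32 - bits)) >>> 0;
  return (ipToInt(ip) & mask) === (ipToInt(base) & mask);
}

console.log(inCidr("172.20.0.10", "172.16.0.0/12")); // true  (default EKS service range)
console.log(inCidr("172.20.0.10", "10.0.0.0/8"));    // false (not in the VPC range)
```

So the kube-dns ClusterIP living outside your subnets is expected behavior, not a misconfiguration.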
(Unrelated note: You don't have to specify "dependsOn" in your code if your resource depends on an output already, the relationship is inferred by Pulumi.)
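For instance, a minimal sketch of what I mean (hypothetical resource names, not your actual eks.ts):

```typescript
import * as aws from "@pulumi/aws";

const vpc = new aws.ec2.Vpc("main", { cidrBlock: "10.0.0.0/16" });

// Pulumi infers the dependency on `vpc` because `vpc.id` is an Output:
const subnet = new aws.ec2.Subnet("private-a", {
    vpcId: vpc.id, // implicit dependsOn: [vpc]
    cidrBlock: "10.0.1.0/24",
});

// So this is redundant:
// new aws.ec2.Subnet("private-a", { ... }, { dependsOn: [vpc] });
```

You only need an explicit dependsOn when there's no data flow between the resources.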
A few sanity checks that come to mind:
• Can you resolve the hostname if you explicitly specify the nameserver?
nslookup google.com 8.8.8.8
• Did you enable DNS support in your VPC? It should be enabled by default, but better to double-check.
• If you launch an EC2 instance in a public subnet of your VPC, do you encounter the same problem? If so, you can rule out Kubernetes entirely and focus on your VPC. If it turns out that it is a Kubernetes problem, there's a helpful DNS debugging guide in the Kubernetes documentation.
e
great next steps, lmc
@modern-zebra-45309
❯ kubectl run -i --tty --rm dns-test --image=busybox --restart=Never -- sh

If you don't see a command prompt, try pressing enter.
/ # 
/ # 
/ # nslookup google.com 8.8.8.8
Server:         8.8.8.8
Address:        8.8.8.8:53

Non-authoritative answer:
Name:   google.com
Address: 2607:f8b0:400a:80a::200e

Non-authoritative answer:
Name:   google.com
Address: 172.217.14.206
so if I force a DNS server then it works
so I have network connectivity out and on the VPC
is that what you are talking about?
m
No, there is also enableDnsSupport (but it defaults to true), which is not the same. See DNS attributes in your VPC in the AWS docs for what they mean and the effects they have.
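In Pulumi terms, a minimal sketch of the two attributes (hypothetical VPC, not your actual setup):

```typescript
import * as aws from "@pulumi/aws";

const vpc = new aws.ec2.Vpc("main", {
    cidrBlock: "10.0.0.0/16",
    // enableDnsSupport controls the Amazon-provided resolver inside the VPC
    // (the .2 address of the VPC CIDR). Defaults to true.
    enableDnsSupport: true,
    // enableDnsHostnames controls whether instances get public DNS hostnames.
    // Defaults to false for non-default VPCs.
    enableDnsHostnames: true,
});
```

If enableDnsSupport were off, pods and instances alike would fail to resolve anything through the VPC resolver, which would match your symptoms.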
Have you tried what happens on a "regular" Amazon Linux EC2 instance in the same subnet? I think it would be helpful as a baseline for you to know if it's the VPC or Kubernetes.
e
I stood up an EC2 instance yesterday on the private subnet without a public interface and it failed to resolve. Doing the same thing today but on the public subnet (still no public interface) and seeing if this will resolve DNS... will keep you up to date
Ended up getting it fixed. I think it had something to do with having dnsHostnames set to true in the VPC. I removed it and everything worked.