a
Hey everyone, I’m managing an EKS cluster, but I don’t really understand the difference between the ManagedNodeGroup and NodeGroupV2. Which one should we use when creating the EKS cluster and custom node groups with autoscaling?
c
It's... complex. My suggestion is to stick with ECS.
a
But both are very close, no?
m
There are two types of node groups in EKS:
• Managed node groups are backed by an EC2 Auto Scaling group that AWS provisions and manages on your behalf. Managed node groups are tagged so that they work with the cluster autoscaler (which you'll have to deploy to the EKS cluster yourself).
• Self-managed nodes are EC2 instances that you deploy yourself and then add as nodes to the cluster.
Unless you have specific requirements, the recommendation is to stick with managed node groups
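Roughly, a managed node group in Pulumi TypeScript looks like this. This is a minimal sketch, not a complete program: the names, instance type, and sizes are placeholders, and I'm assuming the ManagedNodeGroup resource from @pulumi/eks:
```typescript
import * as aws from "@pulumi/aws";
import * as eks from "@pulumi/eks";

// IAM role the worker nodes will assume.
const nodeRole = new aws.iam.Role("node-role", {
    assumeRolePolicy: aws.iam.assumeRolePolicyForPrincipal({
        Service: "ec2.amazonaws.com",
    }),
});

// Standard worker-node policies.
[
    "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy",
    "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy",
    "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
].forEach((policyArn, i) =>
    new aws.iam.RolePolicyAttachment(`node-policy-${i}`, {
        role: nodeRole,
        policyArn,
    }));

// Skip the default node group so we can attach our own.
const cluster = new eks.Cluster("my-cluster", {
    skipDefaultNodeGroup: true,
    instanceRoles: [nodeRole],
});

// AWS provisions and manages the underlying Auto Scaling group.
const managedNodeGroup = new eks.ManagedNodeGroup("managed-ng", {
    cluster: cluster,
    nodeRole: nodeRole,
    instanceTypes: ["t3.medium"],
    scalingConfig: {
        minSize: 1,
        desiredSize: 2,
        maxSize: 5,
    },
});
```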
a
Thanks, but NodeGroupV2 also has an autoscaling config.
m
Yes, NodeGroupV2 also creates an EC2 Auto Scaling group.
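For comparison, a rough NodeGroupV2 sketch reusing the cluster and nodeRole from the snippet above (sizes are placeholders again):
```typescript
// Self-managed nodes: Pulumi creates a launch template and the
// EC2 Auto Scaling group directly in your account.
const selfManagedNodeGroup = new eks.NodeGroupV2("self-managed-ng", {
    cluster: cluster,
    instanceType: "t3.medium",
    instanceProfile: new aws.iam.InstanceProfile("ng-profile", {
        role: nodeRole,
    }),
    minSize: 1,
    desiredCapacity: 2,
    maxSize: 5,
});
```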
a
Thanks, I'll go with managed node groups, but I was wondering why the guide uses NodeGroupV2.
m
Not sure which guide you're referring to, do you have a link?
m
Also, note that placing the EC2 instances in Auto Scaling groups is a prerequisite for Kubernetes cluster autoscaling, but on its own it is not sufficient. You need to deploy the cluster autoscaler (or use Karpenter) for scaling.
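For example, here's a sketch of deploying the cluster autoscaler with its Helm chart via Pulumi; the chart values follow the kubernetes/autoscaler repo, and the region is a placeholder. For self-managed groups you'd also need the k8s.io/cluster-autoscaler/... auto-discovery tags on the ASG (e.g. via NodeGroupV2's autoScalingGroupTags):
```typescript
import * as k8s from "@pulumi/kubernetes";

// Deploy the cluster autoscaler into the EKS cluster; without it
// (or Karpenter), the ASGs never scale in response to pending pods.
const clusterAutoscaler = new k8s.helm.v3.Release("cluster-autoscaler", {
    chart: "cluster-autoscaler",
    repositoryOpts: { repo: "https://kubernetes.github.io/autoscaler" },
    namespace: "kube-system",
    values: {
        autoDiscovery: { clusterName: cluster.eksCluster.name },
        awsRegion: "eu-west-1",   // placeholder region
    },
}, { provider: cluster.provider });
```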
a
OK, I understand the difference between the two a bit better now. I'll stick with managed node groups, but I'm having an issue with liveness/readiness probes and I was wondering if the node group could be the cause, though I think it comes from the CNI (I don't have the addon). I can't reach the pod IP from inside the host node: I can reach 0.0.0.0:port but not <pod-ip>:port on the same node.
m
> Thanks, I'll go with managed node groups, but I was wondering why the guide uses NodeGroupV2.
At first glance, I don't see why the tutorial would not work with a managed node group.
> I'm having an issue with liveness/readiness probes and I was wondering if the node group could be the cause, though I think it comes from the CNI (I don't have the addon).
> I can't reach the pod IP from inside the host node: I can reach 0.0.0.0:port but not <pod-ip>:port on the same node.
This sounds like a Kubernetes networking problem that's unrelated to the Pulumi resource or the flavor of node group you deploy. Maybe https://aws.github.io/aws-eks-best-practices/networking/vpc-cni/ is a good start.
a
Yes, but it's weird that I'm hitting this issue just by following the guide, without touching anything else.
m
Didn't you say that you don't use the Amazon VPC CNI plugin?
a
I just never installed the addon, but it looks like I'm using it; I can see it in the aws-node DaemonSet.
m
Yes, it's installed by default
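If you want to manage it explicitly (e.g. to pin its version), you can adopt it as an EKS managed addon; a sketch with aws.eks.Addon, where the version string is a placeholder:
```typescript
// Adopt the default-installed VPC CNI as a managed EKS addon so its
// version is pinned and tracked by Pulumi.
const vpcCni = new aws.eks.Addon("vpc-cni", {
    clusterName: cluster.eksCluster.name,
    addonName: "vpc-cni",
    addonVersion: "v1.16.0-eksbuild.1",   // placeholder version
    resolveConflictsOnCreate: "OVERWRITE",
});
```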
a
I created the awsx VPC and the EKS cluster + node groups, nothing else. I'll keep you up to date; I'm deploying other apps to see if this is specific to one app, because Argo CD / nginx / cert-manager etc. are working fine.