# aws
p:
Hey everyone, I am facing a weird issue with the pulumi_eks Python package. I created a cluster as shown below. Per the docs, “Cluster is a component that wraps the AWS and Kubernetes resources necessary to run an EKS cluster, its worker nodes, its optional StorageClasses, and an optional deployment of the Kubernetes Dashboard.” (https://www.pulumi.com/registry/packages/eks/api-docs/cluster/#cluster) But after creating the cluster, the node groups are created as well (I can see the instances in the EC2 console), yet they are not attached to the cluster. I checked the EKS console and also tried to list the nodes with `kubectl get nodes`, which returns nothing. What could be the issue? I am flying blind here without any error output from Pulumi; Pulumi says creation of the cluster was successful. I'd appreciate it if anyone could point me in the right direction.
```python
import pulumi
import pulumi_aws as aws
import pulumi_eks as eks

# Get some values from the Pulumi configuration (or use defaults)
config = pulumi.Config()
min_cluster_size = config.get_int("minClusterSize", 3)
max_cluster_size = config.get_int("maxClusterSize", 6)
desired_cluster_size = config.get_int("desiredClusterSize", 3)
eks_node_instance_type = config.get("eksNodeInstanceType", "t2.medium")

eks_vpc = aws.ec2.Vpc.get("my-default-vpc", "<vpc-id>")  # reusing an existing VPC by ID
# Look up the subnets by ID, scoped to the VPC. The output-form invoke is used
# so that the Output-typed eks_vpc.id can be passed as a filter value.
public_subnets = aws.ec2.get_subnets_output(filters=[
    aws.ec2.GetSubnetsFilterArgs(name="vpc-id", values=[eks_vpc.id]),
    aws.ec2.GetSubnetsFilterArgs(name="subnet-id",
                                 values=["subnet-xxxxxxx", "subnet-xxxxxx"]),
])
private_subnets = aws.ec2.get_subnets_output(filters=[
    aws.ec2.GetSubnetsFilterArgs(name="vpc-id", values=[eks_vpc.id]),
    aws.ec2.GetSubnetsFilterArgs(name="subnet-id",
                                 values=["subnet-xxxxxxxxxxxxxxxxx", "subnet-xxxxxxxxxxxxxxx"]),
])
eks_cluster = eks.Cluster("eks-cluster",
                          # Put the cluster in the existing VPC looked up above
                          vpc_id=eks_vpc.id,
                          # Public subnets will be used for load balancers
                          public_subnet_ids=public_subnets.ids,
                          # Private subnets will be used for cluster nodes
                          private_subnet_ids=private_subnets.ids,
                          # Default node group sizing and instance type
                          instance_type=eks_node_instance_type,
                          desired_capacity=desired_cluster_size,
                          max_size=max_cluster_size,
                          min_size=min_cluster_size,
                          )
pulumi.export("kubeconfig", eks_cluster.kubeconfig)
pulumi.export("vpcId", eks_vpc.id)
pulumi.export("aws_provider", eks_cluster.aws_provider)
pulumi.export("name", eks_cluster.eks_cluster.id)
pulumi.export("default_node_group", eks_cluster.default_node_group)
b:
This is unlikely to be a Pulumi issue; the AWS API call to create the node groups returned successfully. What's likely happening here is that the nodes are starting up, but the kubelet running on each node is unable to contact the control plane to join the cluster. I've seen this happen because:
• the VPC isn't configured correctly (e.g. the node subnets have no route out, so the kubelet can't reach the cluster endpoint)
• the security groups aren't correct
Long story short, you'll need to get access to the hosts that have been launched and debug their connectivity to the control plane.
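As a first check before you even SSH in, something like this rough sketch (using boto3; the subnet IDs are placeholders, substitute the subnets your nodes launch into) verifies that each node subnet's route table actually has a default route through an internet or NAT gateway:

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder subnet IDs -- substitute the node subnets passed to the cluster.
node_subnet_ids = ["subnet-xxxxxxxxxxxxxxxxx", "subnet-xxxxxxxxxxxxxxx"]

for subnet_id in node_subnet_ids:
    # Route tables explicitly associated with this subnet.
    tables = ec2.describe_route_tables(
        Filters=[{"Name": "association.subnet-id", "Values": [subnet_id]}]
    )["RouteTables"]
    if not tables:
        # No explicit association: the subnet falls back to the VPC's main route table.
        vpc_id = ec2.describe_subnets(SubnetIds=[subnet_id])["Subnets"][0]["VpcId"]
        tables = ec2.describe_route_tables(
            Filters=[
                {"Name": "vpc-id", "Values": [vpc_id]},
                {"Name": "association.main", "Values": ["true"]},
            ]
        )["RouteTables"]
    # A 0.0.0.0/0 route via an internet gateway or NAT gateway is what lets
    # the kubelet reach the cluster endpoint and pull images.
    has_default = any(
        route.get("DestinationCidrBlock") == "0.0.0.0/0"
        and (route.get("GatewayId", "").startswith("igw-") or "NatGatewayId" in route)
        for table in tables
        for route in table["Routes"]
    )
    print(f"{subnet_id}: default route {'present' if has_default else 'MISSING'}")
```

If a subnet comes back MISSING, that alone would explain nodes that launch in EC2 but never register with the cluster.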
p:
Hey @billowy-army-68599, thanks for your reply; let me check this out. When I used the exact same config with the `eksctl create cluster` command, the node groups joined the cluster without any problem.
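Since the eksctl cluster works with the same subnets, I'll start by diffing what the two control planes ended up with. A rough sketch of the comparison (using boto3; both cluster names below are placeholders for mine):

```python
import boto3

eks = boto3.client("eks")

def vpc_config(cluster_name: str) -> dict:
    # resourcesVpcConfig holds the subnets and security groups of the control plane.
    cfg = eks.describe_cluster(name=cluster_name)["cluster"]["resourcesVpcConfig"]
    return {
        "subnets": sorted(cfg.get("subnetIds", [])),
        "clusterSecurityGroup": cfg.get("clusterSecurityGroupId"),
        "extraSecurityGroups": sorted(cfg.get("securityGroupIds", [])),
        "privateEndpoint": cfg.get("endpointPrivateAccess"),
        "publicEndpoint": cfg.get("endpointPublicAccess"),
    }

# Placeholder cluster names -- substitute the real ones.
working = vpc_config("eksctl-cluster")
failing = vpc_config("eks-cluster")

for key, value in working.items():
    if value != failing[key]:
        print(f"{key}: eksctl={value} pulumi={failing[key]}")
```

If the subnets and security groups match, the next place I'll look is the node hosts themselves, as you suggested.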