# getting-started
c
It feels like the nodes are not getting added to the eks cluster
s
Are you assigning them to the correct cluster with proper security group access?
c
As far as I know. Here’s what I’ve got for the security group:
func createEKSSecurityGroup(ctx *pulumi.Context, vpc *ec2.LookupVpcResult) (*ec2.SecurityGroup, error) {
	// Create a Security Group that we can use to actually connect to our cluster
	clusterSg, err := ec2.NewSecurityGroup(ctx, "cluster-sg", &ec2.SecurityGroupArgs{
		VpcId: pulumi.String(vpc.Id),
		Egress: ec2.SecurityGroupEgressArray{
			ec2.SecurityGroupEgressArgs{
				Protocol:   pulumi.String("-1"),
				FromPort:   pulumi.Int(0),
				ToPort:     pulumi.Int(0),
				CidrBlocks: pulumi.StringArray{pulumi.String("0.0.0.0/0")},
			},
		},
		Ingress: ec2.SecurityGroupIngressArray{
			ec2.SecurityGroupIngressArgs{
				Protocol:   pulumi.String("tcp"),
				FromPort:   pulumi.Int(80),
				ToPort:     pulumi.Int(80),
				CidrBlocks: pulumi.StringArray{pulumi.String("0.0.0.0/0")},
			},
		},
	})
	if err != nil {
		return nil, err
	}

	return clusterSg, nil
}
Here’s what I’ve got for the EKS cluster creation:
// Create an EKS cluster
	eksCluster, err := eks.NewCluster(ctx, "eks-flyte-cluster", &eks.ClusterArgs{
		RoleArn: pulumi.StringInput(eksClusterRole.Arn),
		VpcConfig: &eks.ClusterVpcConfigArgs{
			PublicAccessCidrs: pulumi.StringArray{
				pulumi.String("0.0.0.0/0"),
			},
			SecurityGroupIds: pulumi.StringArray{
				securityGroup.ID().ToStringOutput(),
			},
			SubnetIds: toPulumiStringArray(subnets.Ids),
		},
	})
	if err != nil {
		return nil, err
	}

	// Create the EKS Node Group
	// TODO - We need to update the scaling capabilities as a new argument to the function and make it user definable

	nodeGroupName := "flyte-eks-nodegroup-primary"
	_, err = eks.NewNodeGroup(ctx, nodeGroupName, &eks.NodeGroupArgs{
		ClusterName:   eksCluster.Name,
		NodeGroupName: pulumi.String(nodeGroupName),
		NodeRoleArn:   pulumi.StringInput(nodeGroupRole.Arn),
		SubnetIds:     toPulumiStringArray(subnets.Ids),
		ScalingConfig: &eks.NodeGroupScalingConfigArgs{
			DesiredSize: pulumi.Int(5),
			MaxSize:     pulumi.Int(5),
			MinSize:     pulumi.Int(2),
		},
		// Currently fixing the AMI to the latest Amazon Linux 2 AMI
		AmiType: pulumi.String("AL2_x86_64"),

		// TODO - Figure out how we need to setup the instance sizes
		InstanceTypes: pulumi.StringArray{
			pulumi.String("t2.nano"), // Replace with your desired instance type(s)
		},

		// TODO - Add SSH Key
		// RemoteAccess: &eks.NodeGroupRemoteAccessArgs{
		// 	Ec2SshKey: pulumi.String("my-ssh-key"), // Replace with your desired SSH key name
		// },
	})
	if err != nil {
		return nil, err
	}

	ctx.Export("kubeconfig", generateKubeconfig(eksCluster.Endpoint,
		eksCluster.CertificateAuthority.Data().Elem(), eksCluster.Name))

	return eksCluster, nil
s
the control plane needs to allow port 443 from the nodes. but you may also want to reconsider allowing ingress from 0.0.0.0/0
c
Ingress to the cluster?
s
yeah, 0.0.0.0/0 ingress is generally not recommended for a publicly accessible cluster for security reasons, but it depends on what you're using it for. if it's just a personal test cluster it might be fine.
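if you want to keep the endpoint public but lock it down, you can scope PublicAccessCidrs to your own address instead of 0.0.0.0/0. a sketch against the VpcConfig from your snippet (the CIDR below is a placeholder, swap in your real IP):
		VpcConfig: &eks.ClusterVpcConfigArgs{
			// Placeholder caller address; replace with your own public IP/CIDR
			PublicAccessCidrs: pulumi.StringArray{
				pulumi.String("203.0.113.4/32"),
			},
			SecurityGroupIds: pulumi.StringArray{
				securityGroup.ID().ToStringOutput(),
			},
			SubnetIds: toPulumiStringArray(subnets.Ids),
		},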
c
Just to clarify, is this ingress only for the security group, or do I need to add some kind of ingress for the EKS cluster too?
s
EKS nodes need to be able to reach the EKS cluster control plane on port 443
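something like this, added alongside the rules in createEKSSecurityGroup (just a sketch reusing the names from your snippet; the rule name and the VPC-CIDR source are one way to do it, you could also reference the node group's security group directly):
	// Sketch: allow anything inside the VPC (which includes the worker
	// nodes) to reach the Kubernetes API server on 443. ctx, vpc, and
	// clusterSg are the values already in createEKSSecurityGroup; the
	// resource name "cluster-ingress-443" is arbitrary.
	_, err = ec2.NewSecurityGroupRule(ctx, "cluster-ingress-443", &ec2.SecurityGroupRuleArgs{
		Type:            pulumi.String("ingress"),
		Protocol:        pulumi.String("tcp"),
		FromPort:        pulumi.Int(443),
		ToPort:          pulumi.Int(443),
		CidrBlocks:      pulumi.StringArray{pulumi.String(vpc.CidrBlock)},
		SecurityGroupId: clusterSg.ID().ToStringOutput(),
	})
	if err != nil {
		return nil, err
	}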
c
Okay, checking in the new config
Yeah, still no dice. I feel like every attempt at EKS clusters has been failing for me
So I went back and tried the vanilla example for deploying the EKS cluster. After hardcoding the subnets and the default VPC, I managed to launch some nodes. Next I’m gonna try to launch this version of the code and figure out if everything is just breaking because of the VPC selection. For reference, the default-VPC lookup from that vanilla example is sketched below.
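Roughly this (a sketch; the "vpc-id" filter name follows the pulumi-aws docs):
	// Sketch: resolve the default VPC and its subnet IDs instead of
	// hardcoding them. vpc.Id then feeds the security group and
	// subnets.Ids feeds the cluster/node group, same as above.
	vpc, err := ec2.LookupVpc(ctx, &ec2.LookupVpcArgs{
		Default: pulumi.BoolRef(true),
	})
	if err != nil {
		return err
	}
	subnets, err := ec2.GetSubnets(ctx, &ec2.GetSubnetsArgs{
		Filters: []ec2.GetSubnetsFilter{
			{Name: "vpc-id", Values: []string{vpc.Id}},
		},
	})
	if err != nil {
		return err
	}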