# aws
s
Hi all, I'm trying to deploy my app to EKS. When I run pulumi up, it hangs while creating the deployment, services, etc. Any idea what the issue might be?
s
If you are creating an EKS cluster and then using the K8s provider to deploy resources to that cluster in the same Pulumi program, you have to use what's called an explicit provider: the "default" K8s provider uses the kubeconfig on your system, and if you're creating the cluster and deploying resources to it in the same program, that kubeconfig obviously won't exist until after the cluster is provisioned. This code snippet demonstrates how to create a cluster and deploy K8s resources to it in the same program: https://github.com/pulumi/workshops/blob/main/aws-getting-started-k8s-ts/index.ts#L24-L74
If that's not your issue, can you post a code snippet or some CLI output that explains the issue?
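For reference, here's a minimal sketch of the explicit-provider pattern in Python (the resource names and the Namespace resource are just illustrative, not from your program):

import pulumi
import pulumi_eks as eks
import pulumi_kubernetes as kubernetes

# Create the EKS cluster (default settings, for brevity).
cluster = eks.Cluster("cluster")

# Explicit K8s provider wired to the new cluster's kubeconfig output,
# so it doesn't fall back to the kubeconfig on your local machine.
eks_provider = kubernetes.Provider("eks-provider", kubeconfig=cluster.kubeconfig)

# Every K8s resource must opt into the explicit provider via ResourceOptions.
ns = kubernetes.core.v1.Namespace("app-ns",
    opts=pulumi.ResourceOptions(provider=eks_provider))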
s
Yes, I did, but it still hangs.
import pulumi
import pulumi_awsx as awsx
import pulumi_eks as eks
import pulumi_kubernetes as kubernetes

# Create a VPC for our cluster.
vpc = awsx.ec2.Vpc("vpc")

# Create an EKS cluster inside of the VPC.
cluster = eks.Cluster("cluster",
    vpc_id=vpc.vpc_id,
    public_subnet_ids=vpc.public_subnet_ids,
    private_subnet_ids=vpc.private_subnet_ids,
    instance_type="t2.micro",
    max_size=6,
    min_size=2,
    desired_capacity=3,
    node_associate_public_ip_address=False)

# Create an explicit K8s provider that targets the new cluster via its kubeconfig.
eks_provider = kubernetes.Provider("eks-provider", kubeconfig=cluster.kubeconfig_json)

# Deploy a small canary service (NGINX), to test that the cluster is working.
my_deployment = kubernetes.apps.v1.Deployment("my-deployment",
    metadata=kubernetes.meta.v1.ObjectMetaArgs(
        labels={
            "appClass": "my-deployment",
        },
    ),
    spec=kubernetes.apps.v1.DeploymentSpecArgs(
        replicas=2,
        selector=kubernetes.meta.v1.LabelSelectorArgs(
            match_labels={
                "appClass": "my-deployment",
            },
        ),
        template=kubernetes.core.v1.PodTemplateSpecArgs(
            metadata=kubernetes.meta.v1.ObjectMetaArgs(
                labels={
                    "appClass": "my-deployment",
                },
            ),
            spec=kubernetes.core.v1.PodSpecArgs(
                containers=[kubernetes.core.v1.ContainerArgs(
                    name="my-deployment",
                    image="nginx",
                    ports=[kubernetes.core.v1.ContainerPortArgs(
                        name="http",
                        container_port=80,
                    )],
                )],
            ),
        ),
    ),
    opts=pulumi.ResourceOptions(provider=eks_provider))
Please help me check if I missed anything.
s
Try just kubeconfig instead of the JSON version. Also, make sure your worker nodes (I think it's the worker nodes that fetch the image) have egress on port 443 enabled; otherwise they won't be able to pull the container image from the container registry. That may be enabled by default, but it's worth checking via the console if necessary.
I'm not 100% sure the 443 thing applies to EKS. I am 100% sure it applies to ECS on Fargate.
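In other words, the provider line in your program would become (using the cluster's kubeconfig output rather than kubeconfig_json):

eks_provider = kubernetes.Provider("eks-provider", kubeconfig=cluster.kubeconfig)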
s
Thank you, but I still get the same problem when using only kubeconfig.
q
Are your nodes able to connect to the internet? You set node_associate_public_ip_address to False and populated both public and private subnets (IIRC the cluster launches the node group into the public subnet in this case). EC2 instances in a public subnet need a public IP address in order to access the internet. Can you try setting node_associate_public_ip_address to True instead? Alternatively, launch the nodes into a private subnet.
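A sketch of the suggested change against your snippet above (only the last argument differs; everything else stays the same):

cluster = eks.Cluster("cluster",
    vpc_id=vpc.vpc_id,
    public_subnet_ids=vpc.public_subnet_ids,
    private_subnet_ids=vpc.private_subnet_ids,
    instance_type="t2.micro",
    max_size=6,
    min_size=2,
    desired_capacity=3,
    # Give the nodes public IPs so they can reach the internet from a public subnet.
    node_associate_public_ip_address=True)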
s
I still get the same issue.
I copied from the sample on the Pulumi website, but it's not working.
Can you share a working example?
q
I stand corrected: they're using private subnets by default, so that should be fine. Can you check in the AWS EKS console to confirm that the nodes successfully register with the EKS cluster?