# aws
i
I deployed a service/load balancer within a Kubernetes cluster using the default VPC. I wanted the cluster to use Fargate so that I don’t have to worry about provisioning nodes. Since Fargate requires private subnets, I needed to create a new VPC and deploy the cluster under this new VPC. After I did this though, the service/load balancer returned an empty reply when queried. Anybody know why? Code in thread.
```python
from typing import Literal, Optional, Sequence, Union

import pulumi_aws as aws
import pulumi_awsx as awsx
import pulumi_eks as eks
import pulumi_kubernetes as k8s
from config.pulumi_secrets import get_pulumi_secrets
from utils.constants import S3_BUCKET

import pulumi

THEIA = "theia"
vpc = awsx.ec2.Vpc(f"{THEIA}-vpc")

# Create an EKS cluster with the default configuration.
cluster = eks.Cluster(
    f"{THEIA}-prod",
    fargate=True,
    vpc_id=vpc.vpc_id,
    private_subnet_ids=vpc.private_subnet_ids,
)


app_name = THEIA
app_labels = {"app": app_name}

repo = awsx.ecr.Repository(THEIA)

image = awsx.ecr.Image(
    THEIA,
    repository_url=repo.url,
    path="..",
)

config = pulumi.Config()
ENVVARS = [
    k8s.core.v1.EnvVarArgs(name=env, value=value)
    for env, value in (tuple(get_pulumi_secrets().items()) + (("ENV", "prod"),))
]


def _get_pod_spec_args(
    *,
    command: Sequence[str],
    restart_policy: Union[
        Literal["Always"], Literal["OnFailure"], Literal["Never"]
    ] = "Always",
    liveness_probe: Optional[pulumi.Input[k8s.core.v1.ProbeArgs]] = None,
) -> k8s.core.v1.PodSpecArgs:
    return k8s.core.v1.PodSpecArgs(
        containers=[
            k8s.core.v1.ContainerArgs(
                name=app_name,
                image=image.image_uri,
                env=ENVVARS,
                command=command,
                liveness_probe=liveness_probe,
                resources=k8s.core.v1.ResourceRequirementsArgs(
                    requests={
                        "cpu": "1",
                        "memory": "4Gi",
                    }
                ),
            )
        ],
        restart_policy=restart_policy,
    )


monitoring_server_deployment = k8s.apps.v1.Deployment(
    f"{app_name}-monitoring-server",
    spec=k8s.apps.v1.DeploymentSpecArgs(
        selector=k8s.meta.v1.LabelSelectorArgs(match_labels=app_labels),
        replicas=1,
        template=k8s.core.v1.PodTemplateSpecArgs(
            metadata=k8s.meta.v1.ObjectMetaArgs(labels=app_labels),
            spec=_get_pod_spec_args(
                command=["python", "monitoring/server.py"],
            ),
        ),
    ),
)

monitoring_service = k8s.core.v1.Service(
    f"{app_name}-monitoring-service",
    spec=k8s.core.v1.ServiceSpecArgs(
        type="LoadBalancer",
        selector=app_labels,
        ports=[k8s.core.v1.ServicePortArgs(port=80)],
    ),
)




# Export the URL for the load balanced service.
pulumi.export(
    "url",
    monitoring_service.status.load_balancer.ingress[0].hostname,
)

# Export the cluster's kubeconfig.
pulumi.export(
    "kubeconfig",
    cluster.kubeconfig,
)
```
b
which subnets do your load balancers get provisioned in?
I believe you need to pass your public subnet IDs so that EKS knows which subnets to provision load balancers in: https://www.pulumi.com/registry/packages/eks/api-docs/cluster/#publicsubnetids_nodejs
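rough, untested sketch of what I mean, reusing the awsx VPC from your snippet and just adding public_subnet_ids to the cluster args:

```python
import pulumi_awsx as awsx
import pulumi_eks as eks

vpc = awsx.ec2.Vpc("theia-vpc")

# Fargate pods run in the private subnets; the public subnets tell EKS
# where it is allowed to place internet-facing load balancers.
cluster = eks.Cluster(
    "theia-prod",
    fargate=True,
    vpc_id=vpc.vpc_id,
    private_subnet_ids=vpc.private_subnet_ids,
    public_subnet_ids=vpc.public_subnet_ids,
)
```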
i
When I created the VPC, I created 6 subnets
My ALB is attached to the 3 public subnets
I am passing the public subnet IDs now to see if that fixes the issue
For some other debugging information: I exec’d into the cluster and curled the load balancer using its internal cluster IP, and the curl succeeded
And when I query the load balancer from my personal computer, I get an empty reply, not a “Could not connect” error
This implies that the connection is being blocked by something
b
what address does your load balancer have?
i
Give me a minute, I am recreating the cluster because I am passing the public_subnet_ids
(Also, thanks for the help!)
Here is the address
b
okay, if you check the load balancer, is it attached to your Kubernetes nodes?
i
I think so, because when I queried the load balancer from within the EKS cluster I got the response I would expect from one of my Kubernetes nodes
b
can you check in the console?
i
When I check from the console I get “There are no instances registered to this load balancer”
But I guess I kind of expect that, because Fargate doesn’t create EC2 instances?
b
you can’t attach a classic ELB to Fargate
you need to use an ALB
(that link is for ECS, but it’s a limitation of Fargate in general)
so you’ll need a network load balancer or to install the ALB ingress controller
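if you go the NLB route, something like this is roughly what I mean (untested sketch; I believe the annotations below are only honored once the AWS Load Balancer Controller is installed, and IP targets are what you need on Fargate since there are no instances to register):

```python
monitoring_service = k8s.core.v1.Service(
    f"{app_name}-monitoring-service",
    metadata=k8s.meta.v1.ObjectMetaArgs(
        annotations={
            # Let the AWS Load Balancer Controller handle this Service
            # instead of the legacy in-tree cloud provider.
            "service.beta.kubernetes.io/aws-load-balancer-type": "external",
            # IP targets rather than instance targets -- required on Fargate.
            "service.beta.kubernetes.io/aws-load-balancer-nlb-target-type": "ip",
            "service.beta.kubernetes.io/aws-load-balancer-scheme": "internet-facing",
        },
    ),
    spec=k8s.core.v1.ServiceSpecArgs(
        type="LoadBalancer",
        selector=app_labels,
        ports=[k8s.core.v1.ServicePortArgs(port=80)],
    ),
)
```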
i
```python
monitoring_service = k8s.core.v1.Service(
    f"{app_name}-monitoring-service",
    spec=k8s.core.v1.ServiceSpecArgs(
        type="LoadBalancer",
        selector=app_labels,
        ports=[k8s.core.v1.ServicePortArgs(port=80)],
    ),
)
```
How do I specify an ALB instead of an ELB?
Actually, I think I can figure this out by myself
Thanks for the help!
Actually couldn’t figure that out 😞
Actually, found that we can use a “load_balancer_class” parameter, but it’s not clear what the universe of acceptable inputs is there
Made an issue:
b
@icy-pilot-31118 EKS doesn't natively support an ALB. You'll need to install the AWS Load Balancer Controller
This is essentially the complexity you get when using Kubernetes, unfortunately; it's not inherently a Pulumi problem
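for what it’s worth, once that controller is installed, I believe the value you’d pass for load_balancer_class is "service.k8s.aws/nlb" (untested sketch on top of your service definition):

```python
monitoring_service = k8s.core.v1.Service(
    f"{app_name}-monitoring-service",
    spec=k8s.core.v1.ServiceSpecArgs(
        type="LoadBalancer",
        # Hands the Service to the AWS Load Balancer Controller instead of
        # the legacy in-tree cloud provider.
        load_balancer_class="service.k8s.aws/nlb",
        selector=app_labels,
        ports=[k8s.core.v1.ServicePortArgs(port=80)],
    ),
)
```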
i
Oh ok gotcha. I guess that installing an AWS load balancer controller is complicated?
b
not necessarily, it can be done with a Helm chart. This is a third-party resource so I can’t comment on its quality: https://www.learnaws.org/2021/06/22/aws-eks-alb-controller-pulumi/
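rough, untested sketch of the Helm install with Pulumi, reusing the cluster and vpc from your earlier snippet; it skips the IAM policy / IRSA service-account wiring the controller also needs, and on a Fargate-only cluster you have to pass region/vpcId because the controller can’t reach the EC2 instance metadata service:

```python
import pulumi
import pulumi_aws as aws
import pulumi_kubernetes as k8s

lb_controller = k8s.helm.v3.Release(
    "aws-load-balancer-controller",
    chart="aws-load-balancer-controller",
    namespace="kube-system",
    repository_opts=k8s.helm.v3.RepositoryOptsArgs(
        repo="https://aws.github.io/eks-charts",
    ),
    values={
        # Physical name of the EKS cluster created by pulumi_eks.
        "clusterName": cluster.eks_cluster.apply(lambda c: c.name),
        "region": aws.get_region().name,
        "vpcId": vpc.vpc_id,
    },
    # Target the EKS cluster created earlier, not the ambient kubeconfig.
    opts=pulumi.ResourceOptions(provider=cluster.provider),
)
```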