# aws
a
Hello everyone, question here regarding EKS Fargate profiles and subnets. In short, I am getting the error "Subnet <subnet-name> provided in Fargate Profile is not a private subnet", but as far as I'm aware I have already set it to private. Has anyone else encountered this before and figured out how to solve it? Hopefully I'm not missing something simple 🤔
In my Pulumi script I create a subnet using the following:
zones = get_availability_zones()
subnet_octet = [192, 128, 64, 0]
subnet_ids = []

for zone in zones.names:
    vpc_subnet = ec2.Subnet(
        f"vpc-subnet-{zone}",
        assign_ipv6_address_on_creation=False,
        vpc_id=vpc.id,
        map_public_ip_on_launch=False,
        cidr_block=f"10.100.{subnet_octet[len(subnet_ids)]}.0/21",
        availability_zone=zone,
        tags=util.make_tags(
            Name=f"eks_inf-{zone}",
        ),
    )
    ec2.RouteTableAssociation(
        f"vpc-route-table-assoc-{zone}",
        route_table_id=eks_route_table.id,
        subnet_id=vpc_subnet.id,
    )
    subnet_ids.append(vpc_subnet.id)
And sometime later I try to create a Fargate Profile using:
eks.FargateProfile(
    "fargate_profile",
    cluster_name=eks_cluster.id,
    pod_execution_role_arn=execution_role.arn,
    subnet_ids=vpc.subnet_ids,
    selectors=[
        {
            "namespace": "example",
        }
    ],
)
l
From this page: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html
If a subnet is associated with a route table that has a route to an internet gateway, it's known as a public subnet.
Check the route table in the subnet. Does it have an IGW?
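That rule from the docs can also be checked programmatically. Here's a minimal sketch (not from the thread) that classifies a subnet's route table as public or private; the route dicts mirror the shape returned by `aws ec2 describe-route-tables`, and the IDs are made up for illustration:

```python
# Rule from the VPC docs: a route targeting an internet gateway
# (a "igw-..." GatewayId) makes every associated subnet a public subnet.

def is_public_subnet(routes: list[dict]) -> bool:
    """Return True if any route targets an internet gateway."""
    return any(
        route.get("GatewayId", "").startswith("igw-")
        for route in routes
    )

# Route table pointing 0.0.0.0/0 at an internet gateway -> public.
public_routes = [
    {"DestinationCidrBlock": "10.100.0.0/16", "GatewayId": "local"},
    {"DestinationCidrBlock": "0.0.0.0/0", "GatewayId": "igw-0abc123"},
]

# Route table pointing 0.0.0.0/0 at a NAT gateway -> private.
private_routes = [
    {"DestinationCidrBlock": "10.100.0.0/16", "GatewayId": "local"},
    {"DestinationCidrBlock": "0.0.0.0/0", "NatGatewayId": "nat-0def456"},
]

print(is_public_subnet(public_routes))   # True
print(is_public_subnet(private_routes))  # False
```

This is why the Fargate profile rejected the subnets above: they were associated with a route table whose default route went to an IGW, so EKS classified them as public.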
a
Hey @little-cartoon-10569, thanks for the tip, and you're right, I was using an InternetGateway. I guess I missed this line 😕. But anyway, it's still not clear to me how best to solve this issue. My first guess would be to create a NatGateway instead of an InternetGateway, and to then attach this to the RouteTable. I've tried this, and was able to see an active EKS cluster running with Fargate (yay!). See the code in the next message for more info. Nevertheless, this approach has a few downsides in my opinion:
• Creating a NatGateway also requires creating a new (public) subnet in the VPC, bloating the number of subnets.
• The new subnet (and thus the NatGateway) is specific to a single availability zone, so if that one AZ goes down my whole cluster can't reach the internet (even if the cluster is spread out among multiple AZs).
Do you have any ideas of how these issues could be avoided? Maybe there is another way to create private subnets which doesn't require a NatGateway.
Code mentioned in the above message:
################################
## Make Internet Gateway
igw = ec2.InternetGateway(
    f"vpc-ig",
    vpc_id=vpc.id,
    tags=util.make_tags(
        Name=f"eks_inf-ig-{util.env}",
    ),
)
eks_route_table = ec2.RouteTable(
    f"vpc-public-route-table",
    vpc_id=vpc.id,
    routes=[{"cidr_block": "0.0.0.0/0", "gateway_id": igw.id}],
    tags=util.make_tags(
        Name=f"eks_inf-public_rt-{util.env}",
    ),
)


################################
## Make Public subnet
zones = get_availability_zones()
public_zone = zones.names[0]
vpc_public_subnet = ec2.Subnet(
    f"vpc-public_subnet-{public_zone}",
    assign_ipv6_address_on_creation=False,
    vpc_id=vpc.id,
    map_public_ip_on_launch=True,
    cidr_block=f"{VPC_SUBNET_HEADER}.0.0/21",
    availability_zone=public_zone,
    tags=util.make_tags(
        Name=f"eks_inf-public_subnet-{util.env}-{public_zone}",
    ),
)
ec2.RouteTableAssociation(
    f"vpc-public-route-table-assoc-{public_zone}",
    route_table_id=eks_route_table.id,
    subnet_id=vpc_public_subnet.id,
)


################################
## Make Internal-facing NAT Gateway

eip = ec2.Eip(
    "eks_nat_gateway_ip_allocation",
    tags=util.make_tags(
        Name=f"eks_inf-elastic_ip-{util.env}",
    ),
)
pulumi.export("eip.id", eip.id)

ngw = ec2.NatGateway(
    "nat-gateway",
    allocation_id=eip.id,
    subnet_id=vpc_public_subnet.id,
    tags=util.make_tags(
        Name=f"nat-gateway-{util.env}-{public_zone}",
    ),
)

eks_private_route_table = ec2.RouteTable(
    f"vpc-private-route-table",
    vpc_id=vpc.id,
    routes=[
        {
            "cidr_block": "0.0.0.0/0",  # Is it okay for two route tables to have overlapping CIDR blocks??
            "nat_gateway_id": ngw.id,
        }
    ],
    tags=util.make_tags(
        Name=f"eks_inf-private_rt-{util.env}",
    ),
)

################################
## Make private subnets, one for each AZ in a region

subnet_octet = [192, 128, 64]  # the .0.0/21 block is taken by the public subnet above
subnet_ids = []

for zone in zones.names:
    vpc_subnet = ec2.Subnet(
        f"vpc-private_subnet-{zone}",
        assign_ipv6_address_on_creation=False,
        vpc_id=vpc.id,
        map_public_ip_on_launch=False,
        cidr_block=f"{VPC_SUBNET_HEADER}.{subnet_octet[len(subnet_ids)]}.0/21",  # 2048 addresses per subnet (2043 usable; AWS reserves 5)
        availability_zone=zone,
        tags=util.make_tags(
            Name=f"eks_inf-private_subnet-{util.env}-{zone}",
        ),
        opts=pulumi.ResourceOptions(depends_on=[ngw]),
    )
    ec2.RouteTableAssociation(
        f"vpc-route-table-assoc-{zone}",
        route_table_id=eks_private_route_table.id,
        subnet_id=vpc_subnet.id,
    )
    subnet_ids.append(vpc_subnet.id)

pulumi.export("vpc.subnet_ids", subnet_ids)
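A quick sanity check on the CIDR layout above, assuming the VPC is 10.100.0.0/16 (i.e. `VPC_SUBNET_HEADER == "10.100"`, which isn't shown in the snippet): a /21 covers 8 consecutive values of the third octet, so blocks starting at .0, .64, .128 and .192 can never overlap. Python's stdlib `ipaddress` module can verify this:

```python
# Sanity-check the /21 subnet layout against an assumed 10.100.0.0/16 VPC.
# A /21 spans 8 third-octet values (e.g. 10.100.64.0 - 10.100.71.255),
# so the chosen offsets 0, 64, 128, 192 leave room and never collide.
import ipaddress

vpc = ipaddress.ip_network("10.100.0.0/16")
subnets = [
    ipaddress.ip_network(f"10.100.{octet}.0/21")
    for octet in (0, 64, 128, 192)  # .0 for the public subnet, rest private
]

for net in subnets:
    assert net.subnet_of(vpc)
    assert net.num_addresses == 2048  # 2043 usable; AWS reserves 5 per subnet

# No pair of distinct subnets overlaps.
for a in subnets:
    for b in subnets:
        assert a == b or not a.overlaps(b)
print("layout ok")
```

(To the inline question in the route table above: yes, two route tables may both carry a 0.0.0.0/0 route; a subnet is only ever associated with one route table, so the routes never conflict.)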
l
Not a clue, sorry. I would look in the awsx.ec2.Vpc code and see what it does: it creates private subnets and NAT gateways according to AWS guidelines, in multiple AZs.
c
@agreeable-ram-97887, AWS recommends (see the Note section of the NAT gateway docs) creating a NAT gateway in each AZ to avoid this issue. There is a tradeoff of cost (additional NAT gateways) vs. availability.
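For reference, that per-AZ layout could be sketched in Pulumi roughly like this. This is a sketch only, not the thread author's code: it reuses the `vpc`, `zones`, `eks_route_table`, and `VPC_SUBNET_HEADER` names from the snippets above (all assumed to exist), and the per-AZ public /24 range starting at `.248` is a hypothetical choice that happens not to collide with the /21 blocks:

```python
# One public subnet + NAT gateway + private route table per AZ.
# Costs scale with the number of NAT gateways, but losing one AZ
# no longer cuts off internet egress for the other AZs.
from pulumi_aws import ec2

private_subnet_ids = []
for i, (zone, octet) in enumerate(zip(zones.names, (192, 128, 64))):
    # Small public /24 per AZ, just to host that AZ's NAT gateway.
    public_subnet = ec2.Subnet(
        f"vpc-public_subnet-{zone}",
        vpc_id=vpc.id,
        map_public_ip_on_launch=True,
        cidr_block=f"{VPC_SUBNET_HEADER}.{248 + i}.0/24",  # hypothetical range
        availability_zone=zone,
    )
    ec2.RouteTableAssociation(
        f"vpc-public-rt-assoc-{zone}",
        route_table_id=eks_route_table.id,  # the IGW route table from above
        subnet_id=public_subnet.id,
    )
    eip = ec2.Eip(f"nat-eip-{zone}")
    ngw = ec2.NatGateway(
        f"nat-gateway-{zone}",
        allocation_id=eip.id,
        subnet_id=public_subnet.id,
    )
    # Each AZ's private subnet routes 0.0.0.0/0 through the NAT
    # gateway in the *same* AZ, so failures stay contained.
    private_rt = ec2.RouteTable(
        f"vpc-private-rt-{zone}",
        vpc_id=vpc.id,
        routes=[{"cidr_block": "0.0.0.0/0", "nat_gateway_id": ngw.id}],
    )
    private_subnet = ec2.Subnet(
        f"vpc-private_subnet-{zone}",
        vpc_id=vpc.id,
        map_public_ip_on_launch=False,
        cidr_block=f"{VPC_SUBNET_HEADER}.{octet}.0/21",
        availability_zone=zone,
    )
    ec2.RouteTableAssociation(
        f"vpc-private-rt-assoc-{zone}",
        route_table_id=private_rt.id,
        subnet_id=private_subnet.id,
    )
    private_subnet_ids.append(private_subnet.id)
```

The `awsx.ec2.Vpc` component mentioned above wires up essentially this topology for you when you ask for public and private subnets.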