# python
d
When I create a VPC for use with an AWS ALB, how can I create a VPC without any public subnets? I tried setting:
nat_gateways=pulumi_awsx.ec2.NatGatewayConfigurationArgs(
            strategy=pulumi_awsx.ec2.NatGatewayStrategy.NONE,
        ),
But it still creates them: vpc-st-public-1, vpc-st-public-2, vpc-st-public-3, as you can see in this log:
Updating (st):
     Type                                          Name              Status              Info
 +   pulumi:pulumi:Stack                           devops-st         created (0.00s)     35 messages
 +   └─ awsx:ec2:Vpc                               vpc-st            created (0.12s)     
 +      └─ aws:ec2:Vpc                             vpc-st            created (1s)        
 +         ├─ aws:ec2:Subnet                       vpc-st-public-1   created (10s)       
 +         │  └─ aws:ec2:RouteTable                vpc-st-public-1   created (0.51s)     
 +         │     ├─ aws:ec2:RouteTableAssociation  vpc-st-public-1   created (0.40s)     
 +         │     └─ aws:ec2:Route                  vpc-st-public-1   created (0.64s)     
 +         ├─ aws:ec2:Subnet                       vpc-st-private-2  created (0.90s)     
 +         │  └─ aws:ec2:RouteTable                vpc-st-private-2  created (0.54s)     
 +         │     └─ aws:ec2:RouteTableAssociation  vpc-st-private-2  created (0.33s)     
 +         ├─ aws:ec2:InternetGateway              vpc-st            created (0.63s)     
 +         ├─ aws:ec2:Subnet                       vpc-st-private-3  created (0.89s)     
 +         │  └─ aws:ec2:RouteTable                vpc-st-private-3  created (0.51s)     
 +         │     └─ aws:ec2:RouteTableAssociation  vpc-st-private-3  created (0.34s)     
 +         ├─ aws:ec2:Subnet                       vpc-st-private-1  created (0.90s)     
 +         │  └─ aws:ec2:RouteTable                vpc-st-private-1  created (0.52s)     
 +         │     └─ aws:ec2:RouteTableAssociation  vpc-st-private-1  created (0.34s)     
 +         ├─ aws:ec2:Subnet                       vpc-st-public-3   created (10s)       
 +         │  └─ aws:ec2:RouteTable                vpc-st-public-3   created (0.47s)     
 +         │     ├─ aws:ec2:RouteTableAssociation  vpc-st-public-3   created (1s)        
 +         │     └─ aws:ec2:Route                  vpc-st-public-3   created (0.57s)     
 +         └─ aws:ec2:Subnet                       vpc-st-public-2   created (11s)       
 +            └─ aws:ec2:RouteTable                vpc-st-public-2   created (0.51s)     
 +               ├─ aws:ec2:RouteTableAssociation  vpc-st-public-2   created (0.42s)     
 +               └─ aws:ec2:Route                  vpc-st-public-2   created (0.50s)
Here is the code:
import pulumi_aws.ec2
from pulumi import log
import pulumi_awsx
from aws.eks.base.const import VPC_NAME, CLUSTER_TAG, AVAILABILITY_ZONE_NAMES, VPC_NAMES, \
    CIDR_BLOCKS, DEP_MODE

# VPC
def create_vpc(vpc_name=None, cidr_block=None):
    log.info('[base.vpc.create_vpc]')
    name = vpc_name or VPC_NAME
    cidr_block = cidr_block or CIDR_BLOCKS[DEP_MODE]
    vpc = pulumi_awsx.ec2.Vpc(
        name,
        cidr_block=cidr_block,
        subnet_specs=[
            pulumi_awsx.ec2.SubnetSpecArgs(
                type=pulumi_awsx.ec2.SubnetType.PRIVATE,
                tags={
                    CLUSTER_TAG: "owned",
                    'kubernetes.io/role/internal-elb': '1',
                    'vpc': f'{name}',
                },
            ),
            pulumi_awsx.ec2.SubnetSpecArgs(
                type=pulumi_awsx.ec2.SubnetType.PUBLIC,
                tags={
                    CLUSTER_TAG: "owned",
                    'kubernetes.io/role/elb': '1',
                    'vpc': f'{name}',
                },
            ),
        ],
        availability_zone_names=AVAILABILITY_ZONE_NAMES,
        nat_gateways=pulumi_awsx.ec2.NatGatewayConfigurationArgs(
            strategy=pulumi_awsx.ec2.NatGatewayStrategy.NONE,
        ),
        tags={"Name": name},
    )
    return vpc
s
NAT Gateways exist so that private subnets can reach the internet, and they have to be placed in public subnets. Specify a subnet spec that only includes Isolated subnets and set the NAT Gateway strategy to "None".
But if you do not have public subnets, you cannot use a public-facing ALB (unless you have Transit Gateway set up or something).
The most common use case is that you want an ALB in public subnets and then place your workload in private subnets so that they can only access the internet via NAT. This example shows it for an ECS on Fargate workload: https://pulumi.awsworkshop.io/25_intro_modern_iac_python/40_ecs.html
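To make that concrete, here is a minimal sketch of a VPC with no public subnets at all. The resource name and CIDR block are illustrative, not from the thread; with only Isolated subnets and the NAT strategy set to NONE, awsx creates no public subnets, no Internet Gateway, and no NAT Gateways.

```python
import pulumi_awsx as awsx

# Illustrative names/CIDR. Isolated subnets get a route table with no
# route to an Internet Gateway or NAT Gateway, so nothing public is created.
vpc = awsx.ec2.Vpc(
    "vpc-isolated",
    cidr_block="10.0.0.0/16",
    subnet_specs=[
        awsx.ec2.SubnetSpecArgs(
            type=awsx.ec2.SubnetType.ISOLATED,
        ),
    ],
    nat_gateways=awsx.ec2.NatGatewayConfigurationArgs(
        strategy=awsx.ec2.NatGatewayStrategy.NONE,
    ),
)
```

Note that resources in such a VPC can only reach AWS services via VPC endpoints or a Transit Gateway, since there is no path to the internet.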
d
Thanks. So if I need to expose services to the internet using an AWS ALB, such as app1.example.com and app2.example.com, and these services also require access to the internet, do I need a public subnet? And should I set these options?
'kubernetes.io/role/elb': '1'
and
nat_gateways=pulumi_awsx.ec2.NatGatewayConfigurationArgs(
    strategy=pulumi_awsx.ec2.NatGatewayStrategy.SINGLE,
)
s
Yes, you need public subnets, because the ALB has to have a public IP address, and for a VPC-based service (like EKS) the only way to get that public IP address is to be in a public subnet.
All your EKS nodes should be in a private subnet, though. Nothing goes in a public subnet (usually; there are rare exceptions) but your load balancers. EKS can create and manage the LBs for you. That'll probably be easier.
A single NAT Gateway is good for cost savings if you're doing a proof of concept. For a production workload, you want 3 AZs (this is the default) and a NAT Gateway in each AZ. One per AZ (also the default) is more reliable; a single NAT Gateway is cheaper.
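Putting that advice together, here is a sketch of the production-leaning layout (resource name, CIDR, and tag values are illustrative): public subnets tagged for internet-facing load balancers, private subnets tagged for internal load balancers, and one NAT Gateway per AZ.

```python
import pulumi_awsx as awsx

# Illustrative names/CIDR. Public subnets host the ALB; private subnets
# host the EKS nodes and egress through a NAT Gateway in each AZ.
vpc = awsx.ec2.Vpc(
    "vpc-prod",
    cidr_block="10.0.0.0/16",
    subnet_specs=[
        pulumi_public := awsx.ec2.SubnetSpecArgs(
            type=awsx.ec2.SubnetType.PUBLIC,
            tags={"kubernetes.io/role/elb": "1"},
        ),
        awsx.ec2.SubnetSpecArgs(
            type=awsx.ec2.SubnetType.PRIVATE,
            tags={"kubernetes.io/role/internal-elb": "1"},
        ),
    ],
    nat_gateways=awsx.ec2.NatGatewayConfigurationArgs(
        strategy=awsx.ec2.NatGatewayStrategy.ONE_PER_AZ,
    ),
)
```

The `kubernetes.io/role/*` tags are what the AWS Load Balancer Controller uses for subnet auto-discovery when it provisions load balancers for EKS Services and Ingresses.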
d
@stocky-restaurant-98004 Do I need one Elastic IP for each public subnet I have? Because when I change it to
strategy=pulumi_awsx.ec2.NatGatewayStrategy.ONE_PER_AZ
, it tries to create three public Elastic IPs.
What are these Elastic IPs used for? In the scenario where the AWS Load Balancer Controller is used, aren't Amazon's own general public IPs used for its load balancer? For example, GCP has these general public IPs:
216.239.36.21
216.239.38.21
216.239.32.21
216.239.34.21
https://artifacthub.io/packages/helm/aws/aws-load-balancer-controller
https://github.com/kubernetes-sigs/aws-load-balancer-controller/