# aws
**astonishing-oil-84546**
👋 Hi everyone - running into an issue using the `awsx.ec2.Vpc` (Crosswalk) library. I'm creating a /17 VPC and a handful of subnets under it. I'm getting this error though:
```
error: awsx:ec2:Vpc resource 'shared-non-prod' has a problem: Subnets are too large for VPC. VPC has 8192 addresses, but subnets require 10240 addresses.
```
A /17 should have 32,768 addresses - not 8,192 - any idea how it's doing its calculations?
If it matters, I'm using the Python SDK
OK, I think it's the CIDR alignments... it's using the 32k and dividing it by 3 (per AZ). If I change `availability_zone_cidr_mask` to match the mask of the VPC, it seems to do what I need it to.
Well, that didn't work 😄 Will take a closer look
Can't seem to get this to do what I want... I am passing it this configuration for a /17 VPC:
```yaml
private_subnet_cidr_blocks:
  - 10.212.0.0/19
  - 10.212.32.0/19
  - 10.212.64.0/19
public_subnet_cidr_blocks:
  - 10.212.96.0/21
  - 10.212.104.0/21
  - 10.212.112.0/21
```
I have this in my `VpcArgs()`:
```python
subnet_strategy=awsx.ec2.SubnetAllocationStrategy.EXACT,
subnet_specs=[
    awsx.ec2.SubnetSpecArgs(
        type=awsx.ec2.SubnetType.PRIVATE,
        name="subnet-high",
        cidr_blocks=args['private_subnet_cidr_blocks'],
        tags={
            'sec:domain': "High"
        }
    ),
    awsx.ec2.SubnetSpecArgs(
        # Setting aside for future use. May be removed if not needed.
        type=awsx.ec2.SubnetType.PUBLIC,
        name="subnet-mediation",
        cidr_blocks=args['public_subnet_cidr_blocks'],
        tags={
            'sec:domain': "Mediation"
        }
    )
]
```
But it's creating a bunch of /20 subnets anyway... Is there any way to get it to listen to what I'm asking for?
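For reference, a minimal sketch of the full resource call these arguments could feed into, assuming they are passed straight through to `awsx.ec2.Vpc`; the resource name, VPC CIDR, and AZ count are pulled from elsewhere in the thread, and the actual `<corp>-vpc` wrapper may differ:

```python
import pulumi_awsx as awsx

vpc = awsx.ec2.Vpc(
    "shared-non-prod",  # resource name taken from the error message above
    cidr_block="10.212.0.0/17",
    number_of_availability_zones=3,
    subnet_strategy=awsx.ec2.SubnetAllocationStrategy.EXACT,
    subnet_specs=[
        awsx.ec2.SubnetSpecArgs(
            type=awsx.ec2.SubnetType.PRIVATE,
            name="subnet-high",
            cidr_blocks=["10.212.0.0/19", "10.212.32.0/19", "10.212.64.0/19"],
            tags={"sec:domain": "High"},
        ),
        awsx.ec2.SubnetSpecArgs(
            type=awsx.ec2.SubnetType.PUBLIC,
            name="subnet-mediation",
            cidr_blocks=["10.212.96.0/21", "10.212.104.0/21", "10.212.112.0/21"],
            tags={"sec:domain": "Mediation"},
        ),
    ],
)
```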
**little-cartoon-10569**
I've never used layout strategies before, but I note that Exact requires the entire range to be covered. If you've got a /17 VPC, then you need subnets to cover up to 10.212.127.254. But your highest subnet covers only up to 10.212.119.254. Maybe that's the problem? Try using Auto instead of Exact.
Or else add a dummy subnet to cover from 10.212.120.0 to 10.212.127.254.
**stocky-restaurant-98004**
@astonishing-oil-84546 If this continues to be an issue for you, please submit an issue here: https://github.com/pulumi/pulumi-awsx/issues
**astonishing-oil-84546**
I think I figured out what's going on, and I just went with an explicit layout... I think the gist of it is this, but I haven't looked into the code to confirm:
1. I create a VPC with a /17 and request 3 AZs
2. A /17 has 32,768 addresses
3. Pulumi wants to allocate the address space evenly across the AZs - so it sort of "pretends" the VPC is a fraction of the overall space (ideally it'd be 1/3rd)
4. Network blocks are allocated in powers of two, so you can't divide by 3 - but you can divide by 4, so to cover its bets Pulumi divides by 4, leaving a max of 8,192 addresses per AZ
5. I was trying to allocate a total of 10,240 addresses per AZ

So, essentially, Pulumi is "wasting" 1/4 of the address space when you let it lay out subnets automatically in a 3-AZ configuration. I can understand why it works that way, but a 3-AZ configuration is pretty common in AWS, and this does lead to some address space wastage.
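As a sanity check, the theory above reproduces the exact numbers in the error message; the "round 3 AZs up to 4 blocks" step is the hypothesis in point 4, not something confirmed from the awsx source:

```python
# Per-AZ budget if a /17 (32,768 addresses) is carved into power-of-two chunks:
# 3 AZs can't split a block evenly, so it effectively becomes 4 chunks of /19 each.
per_az_budget = 2 ** (32 - 19)                        # 8192

# Per-AZ demand for the requested layout: one /19 private + one /21 public subnet.
per_az_requested = 2 ** (32 - 19) + 2 ** (32 - 21)    # 8192 + 2048 = 10240

print(per_az_budget, per_az_requested)  # 8192 10240 - matches the error message
```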
> Or else add a dummy subnet to cover from 10.212.120.0 to 10.212.127.254.

Thanks @little-cartoon-10569 - this is essentially what I did. It's a pity, because being able to pass in `cidrMask` or `size` would have been really nice, but not at the expense of throwing away 1/4 of my address space. I actually needed to add 3 /24s since I needed to match the AZ layout.
@stocky-restaurant-98004 Apparently my corporate GitHub credentials won't let me create issues 😢 Here's what I wrote up:
```
This may not be a bug, per se, but it is a UX issue. When trying to use automatic subnet allocation across 3 AZs, you're basically forced to throw away 1/4 of your address space.

Example:

1. Create a VPC with a /17 CIDR block (32,768 addresses)
2. Create 3 /19 private subnets
3. Create 3 /21 public subnets

This should fit evenly in a /17 with a bit of leftover space (basically a final /21). When attempting to do this, you receive an error:

error: awsx:ec2:Vpc resource 'shared-non-prod' has a problem: Subnets are too large for VPC. VPC has 8192 addresses, but subnets require 10240 addresses.

I think this is happening because network blocks don't divide easily by 3, so it's rounding up to 4 and dividing the address space that way (32,768 / 4 = 8,192).

In AWS, a 3-AZ network configuration is very common, as this is the default minimum number of availability zones per region.

It'd be nice to use automatic subnet allocation, but not at the expense of throwing away 1/4 of the address space.
```
Pulumi `about` output:
```
CLI
Version      3.191.0
Go Version   go1.25.0
Go Compiler  gc

Plugins
KIND      NAME          VERSION
resource  aws           7.7.0
resource  awsx          3.0.0
resource  docker        4.8.2
resource  docker-build  0.0.13
language  python        3.191.0

Host
OS       darwin
Version  13.7.6
Arch     arm64

This project is written in python: executable='/Users/Source/aws-platform-management-vpc/.venv/bin/python' version='3.13.3'

Dependencies:
NAME        VERSION
<corp>-vpc  0.1.0
```
**little-cartoon-10569**
Did you try Auto strategy instead of Exact?
**stocky-restaurant-98004**
@astonishing-oil-84546 I assume that's a restriction that comes from your employer? This is a new one to me, but I'm guessing that it's so you don't reveal which vendors your company works with?
Created this for you. LMK if you need me to make any edits: https://github.com/pulumi/pulumi-awsx/issues/1698
a
It's always going to be a waste with 3 AZs because of the CIDR math. That's why a good, dedicated layout of the IP space needs to be done... or you use IPv6 ;-)
**astonishing-oil-84546**
> @astonishing-oil-84546 I assume that's a restriction that comes from your employer? This is a new one to me, but I'm guessing that it's so you don't reveal which vendors your company works with?
Yeah, regrettably 😞
Issue looks good, thanks @stocky-restaurant-98004!