white-rainbow-89327
06/15/2022, 3:30 AM
I'm having trouble setting atlasCidrBlock in a managed mongodbatlas.NetworkPeering resource using the Node version of Pulumi. The option seems to have no effect, and the created resource always ends up with the default value of 192.168.248.0/21.
import * as mongodbatlas from "@pulumi/mongodbatlas";

export const atlasPeer = new mongodbatlas.NetworkPeering('atlas-peer', {
    providerName: "AWS",
    awsAccountId: AWS_ACCOUNT_ID,
    vpcId: VPC_ID,
    projectId: ATLAS_PROJECT_ID,
    containerId: mongoCluster.containerId,
    accepterRegionName: AWS_REGION,
    routeTableCidrBlock: '10.11.0.0/16',
    atlasCidrBlock: '192.168.232.0/21',
});
This snippet creates a stack output of:
{
  "accepterRegionName": "us-west-2",
  "atlasCidrBlock": "192.168.232.0/21",
  "atlasGcpProjectId": "",
  "atlasId": "XXXXXXXXXX",
  "atlasVpcName": "",
  "awsAccountId": "999999999",
  "azureDirectoryId": "",
  "azureSubscriptionId": "",
  "connectionId": "pcx-XXXXXXXX",
  "containerId": "XXXXXXXXXX",
  "errorMessage": "",
  "errorState": "",
  "errorStateName": "",
  "gcpProjectId": "",
  "id": "XXXXXXXXX",
  "networkName": "",
  "peerId": "XXXXXXXXX",
  "projectId": "XXXXXXXX",
  "providerName": "AWS",
  "resourceGroupName": "",
  "routeTableCidrBlock": "10.11.0.0/16",
  "status": "",
  "statusName": "PENDING_ACCEPTANCE",
  "urn": "urn:pulumi:staging::journey::mongodbatlas:index/networkPeering:NetworkPeering::atlas-peer",
  "vnetName": "",
  "vpcId": "vpc-00000000"
}
...with an atlasCidrBlock of 192.168.232.0/21, though the created peering connection instead has the default Atlas CIDR of 192.168.248.0/21, as does its corresponding route table entry on my AWS side. Interestingly, the stack output's value for atlasCidrBlock doesn't even change across pulumi refresh operations.
Is it possible I'm doing something wrong? If not, is this most likely a bug with pulumi or with atlas?
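One workaround I'm considering, on the assumption that the Atlas-side CIDR is actually controlled by the project's NetworkContainer rather than by the peering itself, is to manage the container explicitly and pass its CIDR there (untested sketch; the container name and region value here are illustrative, and the uppercase identifiers are the same placeholders as in my snippet above):

```typescript
import * as mongodbatlas from "@pulumi/mongodbatlas";

// Sketch: define the network container explicitly instead of reusing the
// one auto-created for the cluster, and set the Atlas-side CIDR on it.
const container = new mongodbatlas.NetworkContainer('atlas-container', {
    projectId: ATLAS_PROJECT_ID,         // same placeholder as above
    providerName: "AWS",
    regionName: "US_WEST_2",             // illustrative region value
    atlasCidrBlock: '192.168.232.0/21',  // the CIDR that had no effect on the peering
});

export const atlasPeer = new mongodbatlas.NetworkPeering('atlas-peer', {
    providerName: "AWS",
    awsAccountId: AWS_ACCOUNT_ID,
    vpcId: VPC_ID,
    projectId: ATLAS_PROJECT_ID,
    containerId: container.containerId,  // reference the managed container
    accepterRegionName: AWS_REGION,
    routeTableCidrBlock: '10.11.0.0/16',
});
```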
Thanks!
echoing-dinner-19531
06/15/2022, 12:18 PM