Getting `pulumi up` errors trying to update AWS VPC security groups
# general
Getting `pulumi up` errors trying to update AWS VPC security groups. I've added a new SG rule, but it looks like the change was interpreted as much more than a simple addition of a new rule.
Sample error:
```
aws:ec2:SecurityGroupRule (internal-ingress-5):
    error: Plan apply failed: [WARN] A duplicate Security Group rule was found on (sg-062d63ab053bfe852). This may be
    a side effect of a now-fixed Terraform issue causing two security groups with
    identical attributes but different source_security_group_ids to overwrite each
    other in the state. See https://github.com/hashicorp/terraform/pull/2376 for more
    information and instructions for recovery. Error message: the specified rule "peer: sg-062d63ab053bfe852, TCP, from port: 0, to port: 65535, ALLOW" already exists
```
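The failing resource name is a clue. awsx appears to derive each rule's resource name from its position in the `ingress` array (`internal-ingress-0`, `internal-ingress-1`, …), and as the diff below shows, the new SSH rule was prepended, shifting every existing rule up by one index. In particular the `self: true` rule (TCP 0-65535, peer = the group itself, i.e. exactly the rule quoted in the error) moved from index 4 to index 5, so Pulumi planned a create of `internal-ingress-5` while `internal-ingress-4` still owned the identical EC2 rule, and EC2 rejected the create as a duplicate. If that reading is right, the Terraform-state warning in the message is generic text from the upstream provider and a red herring here.

One way to make a rule immune to this kind of renumbering is to manage it as a standalone, explicitly named `aws.ec2.SecurityGroupRule` from `@pulumi/aws` instead of as an array entry. A sketch, not the original code: it assumes the rule lives in a file that imports the groups from vpc.ts, and that the matching entry is dropped from the `ingress` array.

```typescript
import * as aws from "@pulumi/aws";
import { sgBastion, sgInternal } from "./vpc";

// A standalone rule is keyed by its resource name ("internal-ssh"), not by
// its position in an ingress array, so adding or reordering other rules can
// never rename it and force a delete/create cycle.
export const internalSsh = new aws.ec2.SecurityGroupRule("internal-ssh", {
    type: "ingress",
    protocol: "tcp",
    fromPort: 22,
    toPort: 22,
    securityGroupId: sgInternal.id,
    sourceSecurityGroupId: sgBastion.id,
    description: "ssh",
});
```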
Here's the diff of my change:
```diff
diff --git vpc.ts vpc.ts
index 8762986..42c2aa2 100644
--- vpc.ts
+++ vpc.ts
@@ -1,63 +1,86 @@
 import * as awsx from "@pulumi/awsx"
 
 export const vpc = new awsx.ec2.Vpc("vpc", {
    numberOfAvailabilityZones: 3,
 })
 
 export const sgLoadBalancer = new awsx.ec2.SecurityGroup("load-balancer", {
     vpc: vpc,
     egress: [{
         protocol: "all",
         fromPort: 0,
         toPort: 65535,
         cidrBlocks: ["0.0.0.0/0"],
     }],
     ingress: [{
         protocol: "tcp",
         fromPort: 443,
         toPort: 443,
         cidrBlocks: ["0.0.0.0/0"],
         ipv6CidrBlocks: ["::/0"],
     }],
 })
 
+export const sgBastion = new awsx.ec2.SecurityGroup("bastion", {
+    vpc: vpc,
+    egress: [{
+        protocol: "all",
+        fromPort: 0,
+        toPort: 65535,
+        cidrBlocks: ["0.0.0.0/0"],
+    }],
+    ingress: [{
+        protocol: "tcp",
+        fromPort: 22,
+        toPort: 22,
+        cidrBlocks: ["0.0.0.0/0"],
+        ipv6CidrBlocks: ["::/0"],
+    }],
+})
+
 export const sgInternal = new awsx.ec2.SecurityGroup("internal", {
     vpc: vpc,
     egress: [{
         protocol: "all",
         fromPort: 0,
         toPort: 65535,
         cidrBlocks: ["0.0.0.0/0"],
     }],
     ingress: [{
+       protocol: "tcp",
+       fromPort: 22,
+       toPort: 22,
+       sourceSecurityGroupId: sgBastion.id,
+       description: "ssh",
+    }, {
        protocol: "tcp",
        fromPort: 8001,
        toPort: 8001,
        sourceSecurityGroupId: sgLoadBalancer.id,
        description: "web",
     }, {
        protocol: "tcp",
        fromPort: 8080,
        toPort: 8080,
        sourceSecurityGroupId: sgLoadBalancer.id,
        description: "gw",
     }, {
        protocol: "tcp",
        fromPort: 8114,
        toPort: 8114,
        sourceSecurityGroupId: sgLoadBalancer.id,
        description: "cw-proxy",
     }, {
        protocol: "tcp",
        fromPort: 8504,
        toPort: 8504,
        sourceSecurityGroupId: sgLoadBalancer.id,
        description: "file-proxy",
     }, {
        protocol: "tcp",
        fromPort: 0,
        toPort: 65535,
        self: true,
        description: "self",
     }],
 })
```
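If the renumbering explanation above is right, the lowest-touch fix is to append new rules at the end of the `ingress` array instead of prepending them: the existing entries keep indices 0-4 (and therefore their resource names), and the only planned change is a single create of `internal-ingress-5`. A sketch of how the `sgInternal` definition in vpc.ts would look, with everything except the final entry unchanged:

```typescript
import * as awsx from "@pulumi/awsx"  // already at the top of vpc.ts

export const sgInternal = new awsx.ec2.SecurityGroup("internal", {
    vpc: vpc,
    egress: [{
        protocol: "all",
        fromPort: 0,
        toPort: 65535,
        cidrBlocks: ["0.0.0.0/0"],
    }],
    ingress: [{
        protocol: "tcp",
        fromPort: 8001,
        toPort: 8001,
        sourceSecurityGroupId: sgLoadBalancer.id,
        description: "web",
    }, {
        protocol: "tcp",
        fromPort: 8080,
        toPort: 8080,
        sourceSecurityGroupId: sgLoadBalancer.id,
        description: "gw",
    }, {
        protocol: "tcp",
        fromPort: 8114,
        toPort: 8114,
        sourceSecurityGroupId: sgLoadBalancer.id,
        description: "cw-proxy",
    }, {
        protocol: "tcp",
        fromPort: 8504,
        toPort: 8504,
        sourceSecurityGroupId: sgLoadBalancer.id,
        description: "file-proxy",
    }, {
        protocol: "tcp",
        fromPort: 0,
        toPort: 65535,
        self: true,
        description: "self",
    }, {
        // New SSH rule appended last -> index 5 -> "internal-ingress-5".
        protocol: "tcp",
        fromPort: 22,
        toPort: 22,
        sourceSecurityGroupId: sgBastion.id,
        description: "ssh",
    }],
})
```

With the rule appended, `pulumi preview` should show a single create for `internal-ingress-5` and no deletes or replaces of the existing rules.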
After a somewhat random sequence of `pulumi refresh`, `npm update`, and `pulumi up`, the SG updates have stabilized and there are no more errors (plausibly because the refresh reconciled Pulumi's state with the rules EC2 actually had, so the duplicate create was no longer planned).