# aws
f
Hrm, just renaming an aws:ec2:SecurityGroupRule seems to hit "A duplicate Security Group rule was found on" / https://github.com/hashicorp/terraform/pull/2376 / "Error message: the specified rule ... already exists". The actual Terraform GitHub link seems like a red herring. Of course... wow, it just nuked all the rules sitting in front of my db.
l
This problem is most often seen when you're renaming a Pulumi SecurityGroupRule resource. The best approach is to remove the rule from code (comment it out), run `pulumi up`, then put the code back in and edit it before running `pulumi up` again.
Pulumi's docs are adapted from the Terraform docs, so that link isn't Pulumi-specific, but it can still give useful background since it points at the Terraform resources Pulumi builds its provider on.
To recover from this, you might try commenting out all the rules (inline and associated), running `pulumi up`, putting them back, and running `pulumi up` again. That should clear out the conflicts before getting things back to the correct state.
Avoiding the use of variables in security group rule names (and in similar resources like NACL rules) can reduce the risk of this happening.
f
Yeah, I'm not renaming a rule. I'm just changing a rule or adding rules to allow a new CIDR. I can't clear them out because they front real, live traffic. I'd rather momentarily replace the rule with an "allow all from all ports" rule, but it sounds like that requires a staged/transactional process that Pulumi doesn't support without dynamic backends?
l
If the Pulumi name is changing, then you'll see this behaviour; if it's not, you won't. If you're doing something like building the rules in a loop, where the name is based on the loop index and the port comes from a set, you can get situations where "rule-1" changes from 80 to 443 and "rule-2" changes from 443 to 80. That is effectively a rename, and it causes this problem.
If you're naming the Pulumi resources dynamically, the best approach I've come up with is to name them strictly after the properties of the rule, so that you get "rule-egressTcp80" or similar. This almost always avoids the create-before-delete conflict you're seeing.
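To make the two naming strategies concrete, here's a minimal sketch (my own illustration, not code from this thread; the `RuleSpec` shape and function names are invented, not a real Pulumi API). Index-based names shift when the rule set changes; property-based names stay attached to the rule they describe.

```typescript
// Illustrative rule shape; a real Pulumi SecurityGroupRule has more fields.
interface RuleSpec {
  direction: "ingress" | "egress";
  protocol: string;
  port: number;
}

// Fragile: the name depends on position in the list, so reordering or
// inserting a rule "renames" existing rules and triggers the
// delete/create duplicate-rule conflict described above.
function indexBasedNames(rules: RuleSpec[]): string[] {
  return rules.map((_, i) => `rule-${i}`);
}

// Stable: the name is derived only from the rule's own properties
// (e.g. "rule-egressTcp80"), so a rule keeps its Pulumi name no matter
// where it sits in the list.
function propertyBasedName(rule: RuleSpec): string {
  const proto =
    rule.protocol.charAt(0).toUpperCase() + rule.protocol.slice(1);
  return `rule-${rule.direction}${proto}${rule.port}`;
}
```

With property-based names, adding or removing one rule only creates or deletes that one resource; with index-based names, it can cascade into replacing every rule after the insertion point.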
f
I'm just adding to the cidrBlocks, but maybe awsx.ec2.SecurityGroup is looping?
a
Howdy, I'm late on this, but I wanted to note that I ran into this behavior using plain Terraform a while ago (2018ish?). We found that Terraform was doing a "delete all rules -> recreate all rules" thing for us. What would happen is we would create a very broad rule (e.g. 10.0.0.0/8), and then later we'd try to create a new rule with the same port/protocol that "fell under" that rule due to CIDR notation (e.g. 10.100.0.0/16). The AWS API would throw this error at us and Terraform would exit. From a user's perspective, this looked exactly like Terraform had just nuked all of the rules but one, or however many rules it got through before encountering these pseudo-duplicates.
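To illustrate the CIDR subsumption that trips the AWS API here, this is a minimal IPv4 containment check (my own sketch, not anything from the thread or from an AWS SDK): 10.100.0.0/16 falls entirely inside 10.0.0.0/8, so with the same port/protocol AWS treats the pair as duplicates.

```typescript
// Parse a dotted-quad IPv4 address into an unsigned 32-bit integer.
function ipToInt(ip: string): number {
  return (
    ip.split(".").reduce((acc, octet) => (acc << 8) | parseInt(octet, 10), 0) >>> 0
  );
}

// True when every address in `inner` is also covered by `outer`,
// i.e. `inner` has an equal-or-longer prefix and the same network bits.
function cidrContains(outer: string, inner: string): boolean {
  const [outerIp, outerBitsStr] = outer.split("/");
  const [innerIp, innerBitsStr] = inner.split("/");
  const outerBits = parseInt(outerBitsStr, 10);
  const innerBits = parseInt(innerBitsStr, 10);
  if (innerBits < outerBits) return false; // inner is broader than outer
  const mask = outerBits === 0 ? 0 : (~0 << (32 - outerBits)) >>> 0;
  return ((ipToInt(outerIp) & mask) >>> 0) === ((ipToInt(innerIp) & mask) >>> 0);
}
```

So `cidrContains("10.0.0.0/8", "10.100.0.0/16")` is true, which is exactly the pairing that produced the "duplicate rule" error during the delete-all/recreate-all cycle.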