# aws
l
Hi, I just started out with Pulumi and am playing around with it. I want to use a second EBS volume as persistent storage for an EC2 instance, so I've set it up like this.
data_volume = ebs.Volume("Ghost Data",
                         availability_zone=default_az,
                         size=10,
                         type="gp3",
                         tags={
                             "Name": "Ghost data"
                         }
                         )

ghost_instance = ec2.Instance("Ghost",
                              instance_type="t4g.small",
                              ami=default_ami.id,
                              root_block_device=ec2.InstanceRootBlockDeviceArgs(
                                  delete_on_termination=True,
                                  volume_type="gp3",
                                  tags={
                                      "Name": "Ghost root device"
                                  }
                              ),
                              availability_zone=default_az,
                              vpc_security_group_ids=[
                                  security_groups.sg_web_access.id,
                                  security_groups.sg_ssh_access.id,
                                  security_groups.sg_all_outbound.id
                              ],
                              key_name="chipnibbles-aws-keys",
                              tags={
                                  "Name": "Ghost Instance"
                              }
                              )

second_disk = ec2.VolumeAttachment("Second Disk",
                                   device_name="/dev/sdh",
                                   volume_id=data_volume.id,
                                   instance_id=ghost_instance.id,
                                   force_detach=True
                                   )
The problem I'm facing right now is that when I try to update the `ghost_instance`, Pulumi fails on the VolumeAttachment replacement.
View Live: <https://app.pulumi.com/Regrau/chipnibbles-infrastructure/dev/updates/26>

     Type                         Name                            Status                   Info
     pulumi:pulumi:Stack          infrastructure-dev  **failed**               1 error
 +-  └─ aws:ec2:VolumeAttachment  Second Disk                     **replacing failed**     [diff: ~instanceId]; 1 error
 
Diagnostics:
  aws:ec2:VolumeAttachment (Second Disk):
    error: 1 error occurred:
        * Error attaching volume (vol-07e61ae7f6062403f) to instance (i-002f5a8d58c0bb000), message: "vol-07e61ae7f6062403f is already attached to an instance", code: "VolumeInUse"
I'm still not quite sure if the problem is with Pulumi or AWS itself. I understand the error code, but it seems wrong that Pulumi does not detach the volume before changing the attached instance ID. Why is that a limitation, and are there any workarounds? Can anybody help me out here, please? It's worth mentioning that I want to mount the second EBS volume for database storage; I'd use EFS, but it would be too slow for that purpose.
l
I think you need to add the `deleteBeforeReplace` opt to the attachment: https://www.pulumi.com/docs/intro/concepts/resources/#deletebeforereplace
FYI, big messages like this can be made more channel-friendly by 1) Putting them in threads, below a shorter intro message, and 2) Using text snippets, so they become collapsible (and can be syntax highlighted, which is helpful too).
Generally, Pulumi tries to create replacements before deleting the original. This is helpful most of the time, to ensure continuity of service, security, etc. (removing an SG from your prod server before adding the new one could be risky). But sometimes the provider won't allow two similar things to exist at the same time (as in this case), so you have to tell Pulumi to delete the old one before creating the new one.
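For reference, a minimal sketch of what that would look like in your Python program (assuming `import pulumi` at the top; everything else is from your snippet):

second_disk = ec2.VolumeAttachment("Second Disk",
                                   device_name="/dev/sdh",
                                   volume_id=data_volume.id,
                                   instance_id=ghost_instance.id,
                                   force_detach=True,
                                   # tell Pulumi to delete the old attachment before creating the new one
                                   opts=pulumi.ResourceOptions(delete_before_replace=True)
                                   )

With delete_before_replace=True, the old attachment is torn down first, so the volume is free before it gets attached to the replacement instance.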
l
Many thanks tenwit, the option seems to have done it. Now I'll have to check whether there are any implications for data consistency. As for the formatting of the message, I'll keep that in mind for the future! Thanks for the info.
👍 1
w
I don't think you want `force_detach` then? `deleteBeforeReplace` should be sufficient...
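i.e. something like this (a sketch, keeping the same replacement option as above):

second_disk = ec2.VolumeAttachment("Second Disk",
                                   device_name="/dev/sdh",
                                   volume_id=data_volume.id,
                                   instance_id=ghost_instance.id,
                                   # no force_detach; rely on delete-before-replace ordering instead
                                   opts=pulumi.ResourceOptions(delete_before_replace=True)
                                   )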
l
Yeah I did consider removing it already, thanks for the hint.