# general
c
Hi, I'm creating an ec2 instance, then an ebs volume and a volume attachment, which looks like:
new aws.ec2.VolumeAttachment(`wordpress-content-attachment`, {
    instanceId: ec2Instance.id,
    volumeId: dataVolume.id,
    deviceName: '/dev/xvdc',
    forceDetach: true,
})
When something changes on the instance, it triggers a replacement of the volume attachment as well. You can see in the code sample how Pulumi handles it; nothing changed in the code between the commands. Basically, the first time it says the volume is in use, apparently ignoring forceDetach; the second time it realizes the volume is already detached from the previous instance and is therefore unable to detach it, and after a refresh everything is good. Is that normal?
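One workaround worth trying (a sketch, not verified against this exact provider version) is the deleteBeforeReplace resource option on the attachment, so that during a replacement Pulumi deletes (detaches) the old attachment before creating the new one, instead of its default create-before-delete ordering. deleteBeforeReplace is a standard Pulumi resource option; ec2Instance and dataVolume are the names from the snippet above.

```typescript
import * as aws from "@pulumi/aws";

// Assumes ec2Instance and dataVolume are defined as in the snippet above.
const attachment = new aws.ec2.VolumeAttachment("wordpress-content-attachment", {
    instanceId: ec2Instance.id,
    volumeId: dataVolume.id,
    deviceName: "/dev/xvdc",
    forceDetach: true,
}, {
    // Detach/delete the old attachment before creating its replacement,
    // so the volume is never attached in two places in Pulumi's plan.
    deleteBeforeReplace: true,
});
```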
Any idea about this? Almost the same thing happened when I tried to destroy the whole stack; I got a
Plan apply failed: deleting urn:pulumi:sbx::k8s-wordpress::aws:ec2/volumeAttachment:VolumeAttachment::wordpress-content-attachment: Failed to detach Volume (vol-0a5dc19036ac2232d) from Instance (i-0ba8081cc3cc886b1): IncorrectState: Volume 'vol-0a5dc19036ac2232d' is in the 'available' state.
but when I checked the console, the volume was already detached; a refresh solved this too.
w
Sounds like you may need to set
skipDestroy: true
? See also https://github.com/terraform-providers/terraform-provider-aws/issues/1017.
Actually, your case looks different from that. It appears you manually detached the volume outside of Pulumi? If so, then a refresh would indeed be needed before the destroy.
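The refresh-before-destroy workaround described above can be run with the standard Pulumi CLI commands (the `--yes` flag skips the interactive confirmation):

```shell
# Reconcile Pulumi's state with what actually exists in AWS
# (e.g. a volume attachment that is already detached), then destroy.
pulumi refresh --yes
pulumi destroy --yes
```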
c
I don't feel I should use skipDestroy; I want the attachment to be destroyed. That's why I set
forceDetach: true
and I did not detach it manually outside of Pulumi; in that case I'd know a refresh is necessary.
It looks like Pulumi detaches the volume but does not recognize this without a refresh, so it's unable to fully destroy or replace it.
w
looks like pulumi detaches it,
Which action are you taking with Pulumi that causes the detach?
c
First I changed the user data on the EC2 instance, which triggered a volume attachment replacement; the second time I simply tried to destroy the whole stack.
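If the user-data change should not force a replacement at all, one option (a sketch; ignoreChanges is a standard Pulumi resource option, and userData is the pulumi-aws Instance property; the other arguments here are placeholders) is to tell Pulumi to ignore userData diffs on the instance, which avoids cascading the replacement into the volume attachment:

```typescript
import * as aws from "@pulumi/aws";

// Hypothetical instance definition; bootstrapScript and the AMI id are placeholders.
const ec2Instance = new aws.ec2.Instance("wordpress", {
    ami: "ami-0123456789abcdef0",
    instanceType: "t3.micro",
    userData: bootstrapScript,
}, {
    // Ignore diffs on userData so an edited bootstrap script does not
    // replace the instance (and, with it, the volume attachment).
    ignoreChanges: ["userData"],
});
```

The trade-off is that an updated script is then never applied to the existing instance; this only makes sense when the replacement itself is the problem.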