# general
f
I am having a problem with the Pulumi spot fleet request, maybe someone can help. Everything works as expected, except when I try to destroy the stack. Because Pulumi doesn't wait for the EC2 instances of the spot request to be deleted, it starts removing the VPC, which removes the internet gateway and all other associated resources. It then waits until the subnet can be deleted, which only happens after the actual instance is terminated. Because of this behaviour, the EC2 instance cannot trigger an HTTP call that I have configured on termination (no internet gateway, no security group to allow any traffic). Does anyone have any ideas how to tell Pulumi to basically remove the spot fleet request, then maybe wait 2 minutes, then continue to remove the VPC and its resources?
l
No. This is normally done by using multiple projects. It can be wrapped into a single automation-api app, but you probably need multiple Pulumi projects.
f
Alright, so the idea is that I would first set up the VPC and then request the spot fleet on pulumi up. Then when destroying, I'd run pulumi destroy on the fleet, wait until it is done, then remove the VPC? How would this be orchestrated via the Automation API?
l
You can call any number of projects, either internal or external, from a single file in automation-api
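For reference, a minimal sketch of what that could look like in TypeScript. The project directories (./fleet and ./network) and the stack name "dev" are hypothetical placeholders, not anything from the original setup; the point is just that one Automation API script can drive the two projects in order:

```typescript
// Minimal sketch: one automation-api script driving two separate Pulumi
// projects so the fleet is fully gone before the VPC project is destroyed.
import { LocalWorkspace } from "@pulumi/pulumi/automation";

async function destroyEverything() {
    // Select the existing "dev" stacks of the two projects by their work dirs.
    const fleet = await LocalWorkspace.selectStack({
        stackName: "dev",
        workDir: "./fleet",      // project containing the spot fleet request
    });
    const network = await LocalWorkspace.selectStack({
        stackName: "dev",
        workDir: "./network",    // project containing the VPC, IGW, subnets
    });

    // Destroy the fleet project first.
    await fleet.destroy({ onOutput: console.log });

    // Optional grace period so terminating instances can still make their
    // on-termination HTTP call while the IGW and security groups still exist.
    await new Promise((resolve) => setTimeout(resolve, 2 * 60 * 1000));

    // Only now tear down the network project.
    await network.destroy({ onOutput: console.log });
}

destroyEverything().catch((err) => {
    console.error(err);
    process.exit(1);
});
```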
h
Have you tried using `dependsOn` in the CustomResourceOptions of the spot fleet? If you force the spot fleet to depend on the VPC, Pulumi should ensure the spot fleet is destroyed before it starts destroying the VPC. For example, a rough sketch (the resource names, AMI, and fleet role ARN below are placeholders, not the original config):
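```typescript
import * as aws from "@pulumi/aws";

const vpc = new aws.ec2.Vpc("vpc", { cidrBlock: "10.0.0.0/16" });
const igw = new aws.ec2.InternetGateway("igw", { vpcId: vpc.id });
const subnet = new aws.ec2.Subnet("subnet", {
    vpcId: vpc.id,
    cidrBlock: "10.0.1.0/24",
});

const fleet = new aws.ec2.SpotFleetRequest("fleet", {
    iamFleetRole: "arn:aws:iam::123456789012:role/fleet-role", // placeholder
    targetCapacity: 1,
    launchSpecifications: [{
        ami: "ami-00000000000000000",   // placeholder AMI
        instanceType: "t3.micro",
        subnetId: subnet.id,
    }],
}, {
    // Force the fleet to be created after (and destroyed before) the
    // network pieces, even where there is no direct property reference.
    dependsOn: [vpc, igw, subnet],
});
```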
l
The problem is "Because pulumi doesn't wait for the ec2 instances of the spot request to be deleted". So it's already doing everything in the right order (which is where `dependsOn` would help); it's just too asynchronous and is treating "response to delete request received" as "delete request completed".
f
I managed to solve this by using the Automation API. I had to create a dummy stack to be able to query AWS resources (the EC2 instances); once I detected they were deleted, I moved on to deleting the VPC. Quite a workaround, but it doesn't overcomplicate the solution.
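A sketch of the waiting step, to give an idea of the shape of this. The original used a dummy Pulumi stack to query the instances; this version swaps in a direct AWS SDK call for brevity, and the tag filter is a made-up placeholder:

```typescript
// Poll EC2 until no fleet instance is left in a non-terminated state.
// Region and credentials come from the usual AWS environment/config.
import { EC2Client, DescribeInstancesCommand } from "@aws-sdk/client-ec2";

const ec2 = new EC2Client({});

// True once no matching instance remains outside the "terminated" state.
async function fleetInstancesGone(): Promise<boolean> {
    const res = await ec2.send(new DescribeInstancesCommand({
        Filters: [
            { Name: "tag:project", Values: ["my-fleet"] }, // placeholder tag
            {
                Name: "instance-state-name",
                Values: ["pending", "running", "shutting-down", "stopping", "stopped"],
            },
        ],
    }));
    return (res.Reservations ?? []).every(r => (r.Instances ?? []).length === 0);
}

// Check every 15s between destroying the fleet project and the VPC project.
export async function waitForFleetTermination(): Promise<void> {
    while (!(await fleetInstancesGone())) {
        await new Promise(resolve => setTimeout(resolve, 15_000));
    }
}
```

Calling waitForFleetTermination() between the two destroy calls in the earlier two-project sketch would replace the fixed two-minute sleep with an actual check that the instances are gone.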