# aws

agreeable-ram-97887

03/11/2021, 5:11 PM
Hey all, I'm experiencing some weird behavior with `pulumi destroy`, where Pulumi incorrectly thinks an AWS resource failed to destroy when in fact it had. The process goes something like this:

1. I call `pulumi up` on a stack which includes an AWS EKS cluster. I am not an admin user on AWS, but I do have access to all EKS-relevant actions for resources with the proper tags, so the build succeeds without any issues.
2. I then tear the stack down with `pulumi destroy`, which fails for some unknown reason. The error message tells me that the EKS cluster failed to be destroyed due to a permissions issue (as a result, Pulumi thinks the cluster still exists).
3. But when I check the AWS console, I can confirm that the cluster HAS in fact been properly destroyed. Looking into CloudWatch, it appears that the `pulumi destroy` process successfully destroyed the EKS cluster, but then tried to do it again (which was denied, since the now non-existent cluster of course no longer has the tags that allow me to operate on it).
4. Any subsequent `pulumi destroy` call also fails for the same reason. Similarly, `pulumi refresh` fails because Pulumi would like to "describe" the cluster to determine its state, which also fails due to the same tag condition.
5. The situation is thus stuck until a colleague who is an AWS admin calls either `pulumi destroy` or `pulumi refresh` on my behalf.

So has anyone else experienced similar behavior? Or are there any thoughts on what could be wrong here? It seems to me like a Pulumi issue (rather than an AWS permissions issue), since Pulumi mistakenly thinks the cluster failed to be destroyed.
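For anyone hitting the same situation, the "does the cluster actually still exist?" question in step 3 can be answered directly with the AWS CLI, bypassing both Pulumi's state and the tag-conditioned EKS permissions on a specific resource. This is a minimal sketch; `my-cluster` is a placeholder for your actual EKS cluster name.

```shell
# Query EKS directly, independent of Pulumi's state file.
# If the cluster is already gone, this returns a ResourceNotFoundException,
# confirming that Pulumi's "still exists" belief is stale.
aws eks describe-cluster --name my-cluster
```

Note that `describe-cluster` may itself be blocked by the same tag condition; checking the console or CloudWatch (as above) is the fallback.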
faint-table-42725

03/11/2021, 5:20 PM
If you're able to repro and run with verbose logging (https://www.pulumi.com/docs/troubleshooting/) during the first `pulumi destroy`, that would be helpful in diagnosing what permissions the provider thinks were missing and what operation it attempted to make. Please file an issue against `pulumi-aws` with the relevant details so we can take a closer look. Separately, as a workaround, you can `pulumi state delete <urn>` so that the resource is removed from the state.
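Putting the two suggestions together, a session might look like the sketch below. The flags follow the Pulumi troubleshooting page linked above; the log filename is arbitrary, and the URN must be taken from your own stack's output.

```shell
# 1. Reproduce the failing destroy with verbose provider logging,
#    capturing stderr (where the logs go) to a file for the issue report.
pulumi destroy --logtostderr -v=9 2> destroy-debug.log

# 2. List the stack's resources with their URNs to find the stuck cluster.
pulumi stack --show-urns

# 3. Remove the already-deleted cluster from Pulumi's state so later
#    destroy/refresh runs stop trying to operate on it.
pulumi state delete '<urn>'   # substitute the cluster's URN from step 2
```

Quoting the URN matters in practice, since URNs contain `::` and `$` characters that some shells would otherwise mangle.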