# aws
Hello team, I'm having trouble with `pulumi refresh` hanging (over an hour with no progress) when I run it. Before that, I ran `pulumi cancel` and it returned a 409 - the update has already completed. I'm using Pulumi v3.121.0. I have checked my AWS credentials and they are working. I'm using Pulumi to manage 129 resources: 4 ECS tasks and all the related cluster/security setup. `pulumi destroy` seems to hang as well. Are there any other commands I can try, or is the only option to delete the stack's resources manually?
I was getting this error before:
```
error: 1 error occurred:
  	* updating urn:pulumi:staging::xx-mab::awsx:x:ecs:FargateTaskDefinition$aws:iam/rolePolicyAttachment:RolePolicyAttachment::xx-mab-xx-xx-service-staging-task-0cbb1731: 1 error occurred:
  	* doesn't support update
```
I could not resolve it with any Pulumi commands, so I made an "out of band" change and deleted the task manually. I reverted my Pulumi code to before the task even existed, but now `pulumi refresh` just hangs.
This was after 15 minutes, but I've also tried waiting an hour:
`pulumi refresh -s staging --yes --logtostderr -v=9 > refresh.txt`
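As an aside, when a resource has been deleted out of band, its stale entry can be dropped from Pulumi's state with `pulumi state delete`, which edits the stack's state only and never touches the cloud resource. A minimal sketch, assuming the URN below is a placeholder (copy the real one from the error output or from `pulumi stack export`):

```shell
# List the URNs currently recorded in the stack's state to find the
# stale entry for the manually deleted task.
pulumi stack export -s staging | grep -o '"urn:pulumi:[^"]*"'

# Remove that entry from state only (placeholder URN shown); the cloud
# resource itself is untouched, so this is safe after an out-of-band delete.
pulumi state delete -s staging 'urn:pulumi:staging::xx-mab::aws:ecs/taskDefinition:TaskDefinition::example-task'
```

After the stale entry is gone, `pulumi refresh` no longer has to reconcile a resource that no longer exists.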
In my experience, when this sort of thing happens it's because of some AWS dependency. For example, Pulumi is trying to delete a subnet, but a network interface somewhere is still using the subnet, so Pulumi appears to freeze and do nothing (because it's waiting on AWS in the background) and eventually fails. Or the Pulumi account doesn't have the right permissions (this happens sometimes when permissions are locked down tighter than simply full admin). That's just one example. Try looking in your AWS logs, and try performing the same operations manually using the same credentials. Sometimes this allows an error message or warning to surface in the AWS console that wasn't visible programmatically. Hope this helps
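The "same credentials" check above can be done from the AWS CLI: `aws sts get-caller-identity` confirms the credentials are valid and shows which IAM principal they resolve to, and repeating one of the reads that `refresh` performs can surface a permission error directly. A sketch, assuming the AWS CLI is configured with the same profile Pulumi uses (the profile name `staging` is a placeholder):

```shell
# Verify the credentials work and see which account/role they map to.
aws sts get-caller-identity --profile staging

# Try an equivalent read that `pulumi refresh` would perform against the
# ECS resources, to see whether it hangs or returns an explicit error.
aws ecs list-clusters --profile staging
```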
That is definitely not expected. Is it possible your AWS credentials expired during this operation? I believe there have been cases where upstream providers hang when credentials expire. If that's not it, you could try grabbing verbose logs and see what is happening when the hang occurs. What you shared above is the stdout, but not the verbose stderr logs that the command you listed should have produced - those should be a lot more verbose!
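Concretely: with `--logtostderr`, the verbose engine and provider logs go to stderr, so the `> refresh.txt` redirection in the command above captures only stdout. A sketch that captures both streams into separate files (file names are arbitrary):

```shell
# stdout (normal CLI output) goes to one file, stderr (the -v=9 verbose
# logs) to another; inspect refresh-verbose.txt to see where the hang is.
pulumi refresh -s staging --yes --logtostderr -v=9 \
  >refresh-stdout.txt 2>refresh-verbose.txt
```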
Thank you @gifted-gigabyte-53859, @white-balloon-205 for the comments. My AWS credentials were the issue.