# aws

aloof-sugar-9157

07/03/2023, 2:02 PM
Hello everyone! I have a best-practices question. We are migrating from Ansible to Pulumi. In the past, our dev environments have shared a common Fargate (ECS) cluster. I am finding that when we have multiple tasks running in the cluster, we have trouble destroying our ECS tasks because Pulumi waits for the cluster to reach an inactive state. That will never really happen, since dozens of other valid tasks are running in the cluster at any given time. Would it be best to run each task in its own cluster so that Pulumi can better manage the destroy step?

limited-rainbow-51650

07/03/2023, 2:46 PM
Hello @aloof-sugar-9157, if you create the cluster and the tasks in the same Pulumi stack, when you destroy that stack, Pulumi tries to destroy the task(s) and the cluster. If the cluster should be shared by many tasks, create the cluster in a separate Pulumi program+stack and export the cluster information as stack output(s). From the program setting up the tasks, use a StackReference to resolve this information and deploy the tasks on the shared cluster. If you now destroy a stack managing one or more tasks, it will not touch the shared cluster.
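A minimal sketch of that pattern in Python (the `my-org/shared-infra/dev` stack name, the `clusterArn` output name, and the `task_definition` resource are assumptions for illustration, not from the thread):

```python
import pulumi
import pulumi_aws as aws

# --- In the shared-infrastructure project (its own program + stack) ---
#   cluster = aws.ecs.Cluster("shared-dev-cluster")
#   pulumi.export("clusterArn", cluster.arn)

# --- In each per-task project ---
# Resolve the shared cluster via a StackReference instead of creating it.
# "my-org/shared-infra/dev" is a hypothetical <org>/<project>/<stack> name.
infra = pulumi.StackReference("my-org/shared-infra/dev")
cluster_arn = infra.get_output("clusterArn")

# task_definition is assumed to be defined earlier in this same program.
service = aws.ecs.Service(
    "wc-ecs-service",
    cluster=cluster_arn,  # a reference, not ownership: destroying this
    task_definition=task_definition.arn,  # stack leaves the cluster alone
    desired_count=1,
    launch_type="FARGATE",
)
```

Because the per-task stack only references the cluster ARN, `pulumi destroy` on that stack removes the service and task definition but never attempts to delete the shared cluster.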

aloof-sugar-9157

07/03/2023, 2:52 PM
Thank you!

salmon-account-74572

07/03/2023, 2:57 PM
What @limited-rainbow-51650 said! 😄 Also, you may find some of the recommendations/considerations for how to structure Pulumi projects in this article helpful: https://www.pulumi.com/blog/iac-recommended-practices-structuring-pulumi-projects/

aloof-sugar-9157

07/03/2023, 6:50 PM
Here is the error I get when destroying.
```
Diagnostics:
  aws:ecs:Service (ITD-1234-wc-ecs-service):
    error: deleting urn:pulumi:ITD-1234::wirecare::aws:ecs/service:Service::ITD-1234-wc-ecs-service: 1 error occurred:
    	* waiting for ECS Service (arn:aws:ecs:us-east-1:877180728458:service/p-wirecare/ITD-1234-wc-ecs-service) delete: timeout while waiting for state to become 'INACTIVE' (last state: 'DRAINING', timeout: 20m0s)
```
We are already creating the service's cluster in a different stack.
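The error above shows the service stuck in DRAINING past the default 20-minute delete timeout, not a cluster-deletion problem. One hedged workaround (assuming the draining really does finish, just slowly) is to lengthen the delete timeout on the service with Pulumi's `customTimeouts` resource option; `cluster_arn` and `task_def_arn` below are hypothetical placeholders:

```python
import pulumi
import pulumi_aws as aws

# cluster_arn and task_def_arn are assumed to be defined elsewhere,
# e.g. resolved via a StackReference and a TaskDefinition resource.
service = aws.ecs.Service(
    "wc-ecs-service",
    cluster=cluster_arn,
    task_definition=task_def_arn,
    desired_count=1,
    launch_type="FARGATE",
    opts=pulumi.ResourceOptions(
        # Allow up to 40 minutes for the service to drain and delete.
        custom_timeouts=pulumi.CustomTimeouts(delete="40m"),
    ),
)
```

If draining routinely takes this long, it is also worth checking the service's deregistration delay and health-check settings, since those govern how quickly tasks drain.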