# general
I'm noticing that updating ECS task definitions is quite slow on the Pulumi free tier. I'm assuming things are much faster when paying for an org?
I don't believe that there's any sort of performance difference between the tiers. The Pulumi tiers affect support, and a few features in the Pulumi service, but nothing to do with the various cloud providers.
Interesting. I'm surprised by how long it takes to update task definitions. I've tried locally on my IP, locally with a VPN, and in a GitHub Actions runner. It's taking 10+ minutes for a task definition with 4 containers. The images themselves haven't changed between runs. I'm using
and defining each container in
Although I don't think it actually matters that the images didn't change
Have you compared that to other ways of updating the task definitions? Is it ECS that is the limiting factor, or the code in Pulumi?
@little-cartoon-10569 I liked using
for simplicity because it is opinionated, but I will likely need to try some other approaches
FWIW, the new containers actually start fairly quickly, but it takes a number of minutes for `pulumi up` to finish
My first thought is that the library code, whether in aws or awsx, is unlikely to be a factor since it's not a query that's been raised very often. It could be a temporary AWS infra problem (the APIs are returning slowly), or Pulumi project code (e.g. chains of `await`s instead of passing unresolved `Output`s).
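To illustrate the distinction above, here is a minimal sketch of the preferred pattern: passing an unresolved `Output` into a resource rather than blocking on resolved values before constructing it. The resource names (`app-logs`, `web`, the nginx image, and the region) are hypothetical, not from the conversation.

```typescript
import * as aws from "@pulumi/aws";

const logGroup = new aws.cloudwatch.LogGroup("app-logs");

// Anti-pattern (sketch): forcing each value to resolve with `await` before
// building the next resource serializes the deployment plan.
// Preferred: pass the unresolved Output and let the engine resolve it,
// so independent resources can be created in parallel.
const containerDefinitions = logGroup.name.apply(name =>
    JSON.stringify([{
        name: "web",                 // hypothetical container
        image: "nginx:stable",       // hypothetical image
        logConfiguration: {
            logDriver: "awslogs",
            options: {
                "awslogs-group": name,          // resolved lazily by the engine
                "awslogs-region": "us-east-1",  // assumed region
            },
        },
    }]),
);
```

The `containerDefinitions` Output can then be passed straight into an `aws.ecs.TaskDefinition` without ever awaiting it in program code.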
My Pulumi code is quite simple, no chains of `await`s
Observing in the console, I don't actually think that AWS is taking very long to launch the new task.
Pulumi is just taking a long time to say that it's finished. The only reason this sucks is that GitHub Actions bills by the minute
Interestingly, the old task is still running. Not sure why that is taking so long to tear down
Are you running the service with a load balancer and a desired count > 0? The load balancer's health check affects when ECS considers the service to be healthy and running.
Yes indeed, I have an LB & port mappings on one of the containers. So basically the old task won't shut down until the health check passes on the new one?
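One knob worth checking for the slow drain described above is the target group itself: a shorter health check interval marks new tasks healthy sooner, and a lower deregistration delay stops ECS from waiting the full default (300 seconds) before old tasks are torn down. A minimal sketch, with hypothetical names (`app-tg`, `/healthz`) and an assumed existing `vpc`:

```typescript
import * as aws from "@pulumi/aws";

// Assumed to exist elsewhere in the program.
declare const vpc: aws.ec2.Vpc;

const targetGroup = new aws.lb.TargetGroup("app-tg", {
    port: 80,
    protocol: "HTTP",
    targetType: "ip",
    vpcId: vpc.id,
    deregistrationDelay: 30,   // default is 300s; old tasks drain much faster
    healthCheck: {
        path: "/healthz",      // hypothetical health endpoint
        interval: 10,          // default is 30s
        healthyThreshold: 2,   // fewer consecutive successes before "healthy"
        timeout: 5,
    },
});
```

These are tuning suggestions under those assumptions, not a fix Pulumi requires; the right values depend on how quickly the containers actually become ready.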
It's an
which strangely I don't see in these docs
There is a deployment configuration on the ECS service that determines how containers are deployed and replaced, including a minimum and maximum healthy percent. Raising the maximum percent would allow more new tasks to start at the same time.
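The settings above map onto the `deploymentMinimumHealthyPercent` and `deploymentMaximumPercent` inputs of `aws.ecs.Service`. A minimal sketch, assuming a `cluster` and a `taskDefinition` already defined elsewhere in the program (the name `app-service` and the percentages are illustrative):

```typescript
import * as aws from "@pulumi/aws";

// Assumed to exist elsewhere in the program.
declare const cluster: aws.ecs.Cluster;
declare const taskDefinition: aws.ecs.TaskDefinition;

const service = new aws.ecs.Service("app-service", {
    cluster: cluster.arn,
    taskDefinition: taskDefinition.arn,
    desiredCount: 2,
    // With maximumPercent 200, ECS can start a full replacement set of
    // tasks before stopping any old ones; minimumHealthyPercent 50 lets
    // it stop half the old tasks first if capacity is tight.
    deploymentMinimumHealthyPercent: 50,
    deploymentMaximumPercent: 200,
});
```

Faster rollovers trade off against availability headroom, so the percentages should reflect how much simultaneous capacity the cluster can absorb.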