# general
a
I'm noticing that updating ECS task definitions is quite slow on the Pulumi free tier. I'm assuming things are much faster when paying for an org?
l
I don't believe there's any sort of performance difference between the tiers. The Pulumi tiers affect support and a few features in the Pulumi Service, but have nothing to do with the various cloud providers.
a
Interesting. I'm surprised by how long it takes to update task definitions. I've tried locally from my own IP, locally over a VPN, and in a GitHub Actions runner; it's taking 10+ minutes for a task definition with 4 containers, and the images themselves haven't changed between runs. I'm using `awsx.ecs.FargateService` and defining each container in `taskDefinitionArgs`.
Although I don't think it actually matters that the images didn't change.
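Roughly, the shape of it is this (a trimmed-down sketch rather than my actual code; the cluster, names, images, and sizes are placeholders, and it assumes the awsx 1.x API):

```typescript
import * as aws from "@pulumi/aws";
import * as awsx from "@pulumi/awsx";

// Placeholder cluster; the real stack defines four containers like these.
const cluster = new aws.ecs.Cluster("app-cluster");

const service = new awsx.ecs.FargateService("app-svc", {
    cluster: cluster.arn,
    desiredCount: 1,
    taskDefinitionArgs: {
        containers: {
            web: {
                name: "web",
                image: "nginx:stable", // pinned tags, unchanged between runs
                cpu: 256,
                memory: 512,
                portMappings: [{ containerPort: 80 }],
            },
            worker: {
                name: "worker",
                image: "nginx:stable", // placeholder
                cpu: 256,
                memory: 512,
            },
            // ...two more containers along the same lines
        },
    },
});
```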
l
Have you compared that to other ways of updating the task definitions? Is it ECS that is the limiting factor, or the code in Pulumi?
a
@little-cartoon-10569 I liked using `awsx` for simplicity because it's opinionated, but I will likely need to try some other approaches.
FWIW, the new containers actually start fairly quickly, but it takes a number of minutes for the `pulumi up` to finish.
l
My first thought is that the library code, whether in aws or awsx, is unlikely to be a factor since it's not a query that's been raised very often. It could be a temporary AWS infra problem (the APIs are returning slowly), or Pulumi project code (e.g. chains of `await`s instead of passing unresolved `Output`s).
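To illustrate that last point, this is the kind of contrast I mean (a contrived sketch using the aws provider's lookup functions, not anything from your code):

```typescript
import * as aws from "@pulumi/aws";

// Slower/awkward pattern (sketch): awaiting each lookup before declaring
// resources serializes the whole program:
//
//   const vpc = await aws.ec2.getVpc({ default: true });
//   const subnets = await aws.ec2.getSubnets({
//       filters: [{ name: "vpc-id", values: [vpc.id] }],
//   });
//
// Preferred: use the Output-returning variants and pass the unresolved
// Outputs straight into the next call; the engine resolves them and
// tracks the dependencies for you.
const vpc = aws.ec2.getVpcOutput({ default: true });
export const subnetIds = aws.ec2.getSubnetsOutput({
    filters: [{ name: "vpc-id", values: [vpc.id] }],
}).ids;
```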
a
My Pulumi code is quite simple, no chains of `await`s.
Observing in the console, I don't actually think that AWS is taking very long to launch the new task. Pulumi is just taking a long time to say that it's finished. The only reason this sucks is that GitHub Actions bills by the minute.
Interestingly, the old task is still running. Not sure why that is taking so long to tear down.
q
Are you running the service with a load balancer and a desired count > 0? The health check of the load balancer affects when ECS considers the service to be healthy and running.
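If that's where the time is going, tightening the target group health check usually helps, and lowering the deregistration delay lets the old tasks drain faster. Something like this (a sketch with placeholder names, written against the plain aws provider rather than awsx):

```typescript
import * as aws from "@pulumi/aws";

// Placeholder VPC lookup so the snippet stands alone; use your real VPC.
const vpc = aws.ec2.getVpcOutput({ default: true });

const targetGroup = new aws.lb.TargetGroup("app-tg", {
    port: 80,
    protocol: "TCP",   // NLB target group
    targetType: "ip",  // Fargate tasks register by IP
    vpcId: vpc.id,
    // Also consider lowering deregistrationDelay (default 300s) so old
    // targets finish draining sooner.
    healthCheck: {
        interval: 10,        // check more frequently than the default
        healthyThreshold: 2, // mark new targets healthy sooner
        unhealthyThreshold: 2,
    },
});
```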
a
Yes indeed, I have an LB & port mappings on one of the containers. So basically the old task won't shut down until the health check passes on the new one?
It's an `awsx.lb.NetworkListener`, which strangely I don't see in these docs.
q
There is a deployment configuration for the ECS service that determines how containers are deployed and replaced, including a minimum healthy percent and a maximum percent. Raising the maximum percent allows more new tasks to start at the same time.
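For example (a sketch with a placeholder container; this assumes awsx forwards these two properties to the underlying `aws.ecs.Service`, where they have the same names):

```typescript
import * as aws from "@pulumi/aws";
import * as awsx from "@pulumi/awsx";

const cluster = new aws.ecs.Cluster("app-cluster");

const service = new awsx.ecs.FargateService("app-svc", {
    cluster: cluster.arn,
    desiredCount: 2,
    // ECS deployment configuration: how old and new tasks may overlap.
    deploymentMinimumHealthyPercent: 50, // ECS may stop half the old tasks before new ones are healthy
    deploymentMaximumPercent: 200,       // ECS may run old and new tasks side by side
    taskDefinitionArgs: {
        container: {
            name: "web",
            image: "nginx:stable", // placeholder
            cpu: 256,
            memory: 512,
        },
    },
});
```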
🙏 1