# aws
Hi guys! I've been working on an Airflow stack in ECS for my company, and there's a step I'm not sure how best to implement. There's a dependency between some ECS containers coming up and a database migration that runs with `airflow db upgrade`: I need to run that command only after certain scripts inside the ECS containers have finished, so I have to watch for those containers to be ready after being replaced, and then run the command. What's the best strategy to run a script inside a container after an ECS task definition's container/task has been replaced AND a process inside that container has already finished?
I would use two separate projects. One would set up your base infra, including the longer-running scripts that cause the problem. The other would set up the application-specific infra, including your DB migration. That lets you put a health check in your deployment script (or similar) between running the two projects.
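To make that concrete, here's a minimal sketch of what "a health check between the two projects" could look like as a Python deployment script. Everything here is an assumption for illustration: the cluster/service names (`my-cluster`, `my-service`) and project directories (`base-infra`, `app-infra`) are placeholders, and `aws ecs wait services-stable` is just one possible readiness check; the generic `wait_for` helper is there in case you need a custom check (e.g. polling for a marker that your in-container scripts have finished).

```python
import subprocess
import time


def wait_for(check, timeout=600, interval=10):
    """Poll `check()` until it returns True, or raise after `timeout` seconds."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if check():
            return True
        time.sleep(interval)
    raise TimeoutError("condition not met within timeout")


def service_is_stable():
    # Hypothetical readiness check: `aws ecs wait services-stable` blocks
    # until the service's tasks are running and healthy. Replace the
    # cluster/service names with your own, or swap in a custom check that
    # verifies your scripts have actually finished.
    result = subprocess.run(
        ["aws", "ecs", "wait", "services-stable",
         "--cluster", "my-cluster", "--services", "my-service"],
    )
    return result.returncode == 0


# Sketch of the overall flow (commented out so the module imports cleanly):
# deploy base infra, wait for the containers to settle, then run the
# project that performs the DB migration (`airflow db upgrade`).
#
# subprocess.run(["pulumi", "up", "--yes", "--cwd", "base-infra"], check=True)
# wait_for(service_is_stable)
# subprocess.run(["pulumi", "up", "--yes", "--cwd", "app-infra"], check=True)
```

The key design point is that the wait lives in the orchestration layer, not in either project, so neither project needs to know about the other's timing.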
This could all be done very easily inside an automation-api program too, though if you're not already using it, I wouldn't adopt it just for this.