# aws
c
hello everyone! I've created an AWS stack with `ecs` (ec2 launch type), `vpc`, a `load balancer` and a `container registry`. I have a minimum-size `fastapi` application (with literally 1 route saying "Hello world"). When I create the stack, it is up and running. When I change the source code (say, return "Hello World 1!") and tag correspondingly, I see that:
- a new task definition revision is created
- the old revision is marked inactive
- but the old task keeps running, and the new task is not deployed

The same issue is described here for terraform: https://github.com/hashicorp/terraform/issues/11253

What I've tried:
- in the github issue above it is suggested to use dynamic port mapping; turned out not applicable for my stack, since I use `awsvpc`
- changed the max size of ec2 instances from 1 to 2 in the autoscale group (weird, i know, just tried)
- changed the ec2 instance from `t2.micro` to `c6i.large` in case there is not enough memory

I see no information in `Events`: when the stack returns no error, there are no logs in `Events` in the AWS console either, I just see the inactive task. Any suggestions what else to look at?

P.S. I would appreciate it if anyone could suggest how to at least view logs, since I can only view logs of the running application, but not of task deployment.
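For anyone who wants to poke at the same thing outside the console: the service's deployments and events can also be pulled with boto3. A rough sketch (cluster/service names and region are placeholders, not from my stack):

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")  # region assumed

resp = ecs.describe_services(
    cluster="my-cluster",      # placeholder
    services=["my-service"],   # placeholder
)
service = resp["services"][0]

# each deployment carries desired/running counts per task definition revision
for d in service["deployments"]:
    print(d["status"], d["taskDefinition"], f'{d["runningCount"]}/{d["desiredCount"]}')

# service events are where placement failures show up,
# e.g. "unable to place a task because ... insufficient memory"
for e in service["events"][:10]:
    print(e["createdAt"], e["message"])
```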
b
are you shipping your logs to cloudwatch to see the output of the container logs?
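(if not, it's usually wired up via the `awslogs` log driver in the container definition — a rough sketch, names/region/image are made up:)

```python
import json

# Sketch of a container definition using the awslogs driver; this JSON string
# is what goes into the ECS task definition's containerDefinitions.
container_definitions = json.dumps([{
    "name": "fastapi",
    "image": "<account-id>.dkr.ecr.us-east-1.amazonaws.com/app:tag2",  # placeholder
    "essential": True,
    "portMappings": [{"containerPort": 80}],
    "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
            "awslogs-group": "/ecs/fastapi-app",   # placeholder log group
            "awslogs-region": "us-east-1",
            "awslogs-stream-prefix": "fastapi",
        },
    },
}])
```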
c
yes, I can see application logs, but I see only 200/404 responses:
b
if the task is running, isn’t that the old task definition?
fwiw ECS is terrible for situations like this, can be very frustrating 😞
c
yes, it is running an old task:
b
ah, so you need to try to find the logs for the new task; it's likely to be crashing or not passing healthchecks. you can show the stopped tasks, and it should allow you to view the logs. mine is obviously empty, but you should see something there
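if the console isn't showing them, something like this should dig them out via the API (boto3 sketch, cluster name and region are placeholders):

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")  # region assumed

# recently stopped tasks still carry the reason they stopped
stopped = ecs.list_tasks(cluster="my-cluster", desiredStatus="STOPPED")
if stopped["taskArns"]:
    tasks = ecs.describe_tasks(cluster="my-cluster", tasks=stopped["taskArns"])
    for t in tasks["tasks"]:
        print(t["taskDefinitionArn"])
        print("  stoppedReason:", t.get("stoppedReason"))
        for ct in t["containers"]:
            # per-container exit codes catch crashes / failed healthchecks
            print("  container:", ct["name"], "exit:", ct.get("exitCode"), ct.get("reason"))
```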
c
I have 1 task (an older revision, I guess), which runs and returns the same logs: i.e. it seems I don't have logs about healthchecks or the deployment process, only logs of the currently running container
and here is the task runner screen (one step before):
b
just as a shot in the dark, are you building the container locally?
c
yes
b
Are you on a newer MacBook with an arm CPU?
c
yes, I am on an M chip, but I've set the architecture explicitly (otherwise the image would default to arm64 and wouldn't run on the x86 instances):
```python
import pulumi_docker as docker

# app_ecr_repo and app_registry are defined elsewhere in the stack.
# Build for the x86 EC2 hosts even on an arm (M-series) Mac, then push to ECR.
fastapi_image = docker.Image("fastapi-dockerimage",
    image_name=app_ecr_repo.repository_url.apply(lambda x: f"{x}:tag2"),
    build=docker.DockerBuildArgs(
        context="..",
        dockerfile="../Dockerfile",
        platform="linux/amd64",
    ),
    skip_push=False,
    registry=app_registry,
)
```
b
ah, i’m stumped then I’m afraid 😞
c
may I ask you to look through the entire code, if you have enough time? maybe there is some obvious mistake that I am not paying attention to?
b
I don’t see anything that leaps out there
c
thank you very much, if I find a solution, I will post it here)
It turns out, changing the `ec2` instance type to `c6i.large` didn't itself trigger instance recreation. It only affected the `autoscaling group` parameters, so when I terminated the instance (which was still a `t2.micro`), a new one was created with type `c6i.large` and all changes were applied. After that, each time I change code and run `pulumi up`, all changes are reflected in the tasks as expected, without any manual fixes. I.e. it really was a memory issue, even though my application was as tiny as possible, and one should check the instance type when working with autoscale groups 🙃
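for anyone else hitting this: it looks like an instance refresh on the ASG would roll the instances automatically when the launch template changes, instead of requiring the manual terminate — a rough Pulumi sketch (names, AMI and subnets are placeholders, not my actual code):

```python
import pulumi_aws as aws

lt = aws.ec2.LaunchTemplate("ecs-lt",
    image_id="ami-0123456789abcdef0",   # placeholder ECS-optimized AMI
    instance_type="c6i.large",
)

asg = aws.autoscaling.Group("ecs-asg",
    min_size=1,
    max_size=2,
    vpc_zone_identifiers=["subnet-aaa", "subnet-bbb"],  # placeholders
    launch_template=aws.autoscaling.GroupLaunchTemplateArgs(
        id=lt.id,
        version="$Latest",
    ),
    # instance_refresh replaces running instances when the launch template
    # changes, so a t2.micro wouldn't linger after switching to c6i.large
    instance_refresh=aws.autoscaling.GroupInstanceRefreshArgs(
        strategy="Rolling",
        preferences=aws.autoscaling.GroupInstanceRefreshPreferencesArgs(
            min_healthy_percentage=50,
        ),
    ),
)
```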