abundant-cpu-32345
11/21/2018, 4:26 PM
Diagnostics:
  aws:ecs:Service (django-app-worker):
    error: Plan apply failed: 1 error occurred:
    * updating urn:pulumi:backend-dev::django-deploy::cloud:service:Service$aws:ecs/service:Service::django-app-worker: timeout while waiting for state to become 'true' (last state: 'false', timeout: 10m0s)

  aws:ecs:Service (django-app):
    error: Plan apply failed: 1 error occurred:
    * updating urn:pulumi:backend-dev::django-deploy::cloud:service:Service$aws:ecs/service:Service::django-app: timeout while waiting for state to become 'true' (last state: 'false', timeout: 10m0s)
Any idea where to start debugging this issue? @white-balloon-205
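One place to start is the ECS service's event log, which is exactly where the placement error quoted below came from. A sketch of the CLI call, with the cluster and service names as placeholders:

aws ecs describe-services \
  --cluster <auto-cluster-name> \
  --services <django-app-service-name> \
  --query 'services[0].events[:5]'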
abundant-cpu-32345
11/22/2018, 12:17 AM
service django-app-e087161 was unable to place a task because no container instance met all of its requirements. The closest matching container-instance c02faa85-e355-43c2-af55-925ffeba89ce encountered error "RESOURCE:ENI"
cloud-aws:acmCertificateARN arn:foo
cloud-aws:ecsAutoCluster true
cloud-aws:ecsAutoClusterInstanceType t2.small
cloud-aws:ecsAutoClusterMinSize 3
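These cloud-aws keys are the auto-cluster settings for the stack; they would have been set with the Pulumi CLI, e.g. (a sketch reusing the key names and values above):

pulumi config set cloud-aws:ecsAutoCluster true
pulumi config set cloud-aws:ecsAutoClusterInstanceType t2.small
pulumi config set cloud-aws:ecsAutoClusterMinSize 3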
const cloud = require("@pulumi/cloud-aws");

// Redacted connection strings and credentials (placeholder values).
const REDIS_URL = "foo";
const DATABASE_URL = "foo";
const AWS_ACCESS_KEY_ID = "bar";
const AWS_SECRET_ACCESS_KEY = "bar";

// Build-time args passed to the Docker build.
const dockerArgs = {
    DATABASE_URL: DATABASE_URL,
    AWS_ACCESS_KEY_ID: AWS_ACCESS_KEY_ID,
    AWS_SECRET_ACCESS_KEY: AWS_SECRET_ACCESS_KEY
};

// Web service: Django behind an HTTPS frontend on port 443,
// forwarding to the container's port 8000.
let service = new cloud.Service("django-app", {
    containers: {
        django: {
            build: {
                context: "../",
                args: dockerArgs
            },
            memoryReservation: 128,
            ports: [{ port: 443, protocol: "https", targetPort: 8000 }],
            // ports: [{ port: 8000 }],
            environment: {
                DATABASE_URL: DATABASE_URL,
                CELERY_BROKER_URL: REDIS_URL,
                CELERY_RESULT_BACKEND: REDIS_URL,
                AWS_ACCESS_KEY_ID: AWS_ACCESS_KEY_ID,
                AWS_SECRET_ACCESS_KEY: AWS_SECRET_ACCESS_KEY
            }
        }
    }
    // replicas: 2
});

// Background worker: same image, run as a Celery worker instead.
let workerService = new cloud.Service("django-app-worker", {
    containers: {
        celery: {
            build: {
                context: "../",
                args: dockerArgs
            },
            command: ["celery", "-A", "app", "worker", "-l", "info"],
            memoryReservation: 256,
            environment: {
                DATABASE_URL: DATABASE_URL,
                CELERY_BROKER_URL: REDIS_URL,
                CELERY_RESULT_BACKEND: REDIS_URL,
                AWS_ACCESS_KEY_ID: AWS_ACCESS_KEY_ID,
                AWS_SECRET_ACCESS_KEY: AWS_SECRET_ACCESS_KEY
            }
        }
    }
    // replicas: 2
});

// Export the web service's URL, built from its frontend hostname.
exports.url = service.defaultEndpoint.apply(e => `http://${e.hostname}`);
white-balloon-205
In awsvpc networking mode, each task needs an ENI, and EC2 instances have very few ENIs.
See https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html:
Each task that uses the awsvpc network mode receives its own elastic network interface, which is attached to the container instance that hosts it. EC2 instances have a limit to the number of elastic network interfaces that can be attached to them, and the primary network interface counts as one. For example, a c4.large instance may have up to three elastic network interfaces attached to it. The primary network adapter for the instance counts as one, so you can attach two more elastic network interfaces to the instance. Because each awsvpc task requires an elastic network interface, you can only run two such tasks on this instance type. For more information about how many elastic network interfaces are supported per instance type, see IP Addresses Per Network Interface Per Instance Type in the Amazon EC2 User Guide for Linux Instances.
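Applying the same arithmetic to this cluster (assuming the t2 row of that limits table still reads the same, which is worth re-checking): a t2.small supports up to 3 ENIs, one of which is the primary network interface.

3 ENIs per t2.small - 1 primary       = 2 awsvpc task slots per instance
2 slots x 3 instances (MinSize 3)     = 6 task slots cluster-wide

A rolling update also temporarily needs spare slots: with the default ECS deployment settings, the replacement task is started before the old one is stopped, which is how a cluster that runs fine at steady state can still time out during an update.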
abundant-cpu-32345
11/22/2018, 12:22 AM
@white-balloon-205 would you recommend adding more nodes to the cluster or upgrading from t2.small to something bigger?

white-balloon-205
When you run into this particular limitation there's not a simple answer necessarily, though more smaller instances generally give more bang for the buck on this metric (ENI allocation doesn't scale at all linearly with size/cost).
t2.small from 2 => 3 :-))
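To make the nonlinearity concrete (ENI limits recalled from the AWS table referenced above; verify before relying on them):

t2.small: 3 ENIs => 2 awsvpc tasks per instance
t2.large: 3 ENIs => 2 awsvpc tasks per instance, at roughly 4x the hourly cost

So two t2.smalls can schedule 4 awsvpc tasks for about half the price of one t2.large, which can schedule only 2.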
abundant-cpu-32345
11/22/2018, 12:27 AM