w
When I mount an EFS volume on a container on ECS, do you need to explicitly create security groups and IAM policies or will pulumi handle this for me?
So far I've got:
import * as awsx from "@pulumi/awsx";
import * as efs from "@pulumi/aws/efs";

const repo = new awsx.ecr.Repository(`${projectName}-ecr-repo`);

const storageDeviceName = `${projectName}-efs-db`;

const vpc = new awsx.ec2.Vpc(`${projectName}-vpc`);

const storage = new efs.FileSystem(storageDeviceName, {
    encrypted: true,
    performanceMode: 'maxIO',
});

const mountTargets = vpc.privateSubnetIds.map((subnetId, index) =>
    new efs.MountTarget(`${projectName}-efs-mount-target-${index}`, {
        fileSystemId: storage.id,
        subnetId,
    })
);
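// Note: with no `securityGroups` given, each mount target falls back to the
// VPC's default security group, which only allows traffic from itself, so
// the cluster instances can't reach EFS over NFS (the cause of the error below).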

const volumeName = 'database-efs';

const cluster = new awsx.ecs.Cluster(`${projectName}-ecs-cluster`,{ vpc });

cluster.createAutoScalingGroup(`${projectName}-asg`, {
    subnetIds: vpc.publicSubnetIds,
    templateParameters: {
        minSize: 0,
        maxSize: 2,
        desiredCapacity: 1,
    },
    launchConfigurationArgs: {
        instanceType,
        // ephemeralBlockDevices: [{
        //     deviceName: storageDeviceName,
        //     virtualName: 'ephemeral0',
        // }],
    }
});

const service = new awsx.ecs.EC2Service(`${projectName}-ecs-service`, {
    os: 'linux',
    deploymentMaximumPercent: 100,
    deploymentMinimumHealthyPercent: 0,
    cluster,
    waitForSteadyState: false,
    taskDefinitionArgs: {
        networkMode: 'bridge', // EFS does not support awsvpc
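        // On the EC2 launch type, EFS volumes are mounted through the ECS
        // agent's amazon-ecs-volume-plugin (the plugin named in the error
        // below), which needs NFS connectivity to the mount targets.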
        volumes: [
            {
                name: volumeName,
                efsVolumeConfiguration: {
                    fileSystemId: storage.id,
                }
            }
        ],
        containers: {
           app: {
               memory: config.app.memory,
               image: awsx.ecs.Image.fromDockerBuild(repo.repository, {
                   context: meteorDirectory,
                   args: {
                       NODE_VERSION: getVersion(meteorDirectory, 'meteor --allow-superuser node --version'),
                       METEOR_VERSION: getVersion(meteorDirectory, 'meteor --allow-superuser --version'),
                       SOURCE_FILES,
                   },
                   cacheFrom: true,
               }),
               environment: [
                   {
                       name: 'MONGO_URL',
                        value: 'mongodb://database:27017/meteor',
                   },
                   {
                       name: 'PORT',
                       value: `${config.app.port}`,
                   },
               ],
               dependsOn: [
                   {
                       condition: 'START',
                       containerName: 'database',
                   }
               ]
           },
           database: {
               memory: config.database.memory,
               image: `mongo:${config.database.mongoVersion}`,
               mountPoints: [
                   {
                       sourceVolume: volumeName,
                       containerPath: '/data',
                   }
               ]
           }
        }
   }
});
But I get:
Post http://%2Frun%2Fdocker%2Fplugins%2Famazon-ecs-volume-plugin.sock/VolumeDriver.Create: context deadline exceeded
m
Did you ever figure this out?
w
Yes, I did.
I needed security groups to allow NFS access (TCP 2049) to the EFS mount targets. Helpers below, with a usage sketch after them.
import * as ec2 from "@pulumi/awsx/ec2";
import { SecurityGroupRuleLocation } from "@pulumi/awsx/ec2";

const PORTS = {
  nfs: new ec2.TcpPorts(2049),
};
export function addIngressRule(
  sg: ec2.SecurityGroup,
  name: string,
  source: SecurityGroupRuleLocation,
  port: ec2.SecurityGroupRulePorts | keyof typeof PORTS,
  description?: string
): ec2.SecurityGroupRule {
  if (typeof port === "string") {
    port = PORTS[port];
  }

  return ec2.SecurityGroupRule.ingress(name, sg, source, port, description);
}

export function allowEfsAccess(
  sg: ec2.SecurityGroup,
  subnets: ec2.Subnet[]
): ec2.SecurityGroupRule {
  return addIngressRule(
    sg,
    "efs-nfs",
    {
      cidrBlocks: subnets.map(({ subnet }) => subnet.cidrBlock),
    },
    "nfs",
    "Allow EFS connection"
  );
}
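Wiring it up looks roughly like this (a sketch, not the exact code from the thread: `vpc`, `storage`, and `projectName` come from the first snippet, the `-efs-sg` name is made up, and it assumes an awsx version where `privateSubnets`/`privateSubnetIds` are plain arrays):

// Create a dedicated security group for EFS and allow NFS from the
// private subnets, then attach it to the mount targets.
const efsSecurityGroup = new ec2.SecurityGroup(`${projectName}-efs-sg`, { vpc });
allowEfsAccess(efsSecurityGroup, vpc.privateSubnets);

const mountTargets = vpc.privateSubnetIds.map((subnetId, index) =>
    new efs.MountTarget(`${projectName}-efs-mount-target-${index}`, {
        fileSystemId: storage.id,
        subnetId,
        securityGroups: [efsSecurityGroup.id],
    })
);
With the security group attached to the mount targets, the ECS volume plugin can reach EFS over NFS, which resolves the VolumeDriver.Create timeout above.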
m
Thank you for that as well, you’ve saved me a lot of time 🙂