# aws
b
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";
import * as awsx from "@pulumi/awsx";

if (!process.env.IMAGE_TAG) {
  throw new Error(`Failed to execute due to a missing Docker image tag. The Docker image tag must be provided via the environment in order to deploy all services.
   Make sure to set the following ENV variable:
   IMAGE_TAG
  `);
}

const cluster = new aws.ecs.Cluster("cluster", {});
const lb = new awsx.lb.ApplicationLoadBalancer("lb", {});
const service = new awsx.ecs.FargateService("service", {
    cluster: cluster.arn,
    assignPublicIp: true,
    desiredCount: 2,
    taskDefinitionArgs: {
        container: {
            // This image is built entirely separately from Pulumi inside of a GitHub Action, so the image tag is pulled in from the environment.
            image: process.env.IMAGE_TAG,
            cpu: 512,
            memory: 128,
            essential: true,
            portMappings: [{
                targetGroup: lb.defaultTargetGroup,
            }],
        },
    },
});
export const url = lb.loadBalancer.dnsName;
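
If it helps, an alternative to reading process.env directly is to pass the tag through Pulumi config, so the CI job sets it explicitly before `pulumi up` (for example with `pulumi config set imageTag <value>`). A minimal sketch, assuming a config key named imageTag (the key name is an assumption, not something from the code above):

import * as pulumi from "@pulumi/pulumi";

// Read the image reference from Pulumi config instead of process.env.
// require() fails the preview/update with a clear message if the value is
// missing, which replaces the manual throw above.
const config = new pulumi.Config();
const imageTag = config.require("imageTag");

// imageTag can then be used in the container definition, in the same place
// process.env.IMAGE_TAG is used today, e.g. `image: imageTag`.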
p
As you prefer; I use both approaches.
On some projects Pulumi runs on every merge; on other projects I have a manual step to update the infra and handle deployment using other tools.
In this case, the infra update is a manual step in the pipeline.
b
Could you tell me a bit about the tradeoffs you have encountered with both of these approaches? I would also be curious to hear how your manual step works. We have been running Pulumi on every merge, but that takes ~12-15 minutes to deploy 3 different FargateServices to ECS, which is a rather long time.
p
I deploy to Fargate in a couple of minutes using Pulumi
b
Yeah, we experience the same thing, but we have (now 4) different services as part of the same Pulumi project (3-4 minutes each). The resolution seems to be to split up the Pulumi project.
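
If splitting the project is the route, one common pattern is to keep the shared pieces (cluster, load balancer) in one Pulumi project and give each service its own small project that consumes those outputs through a stack reference, so deploying one service does not touch the others. A minimal sketch, assuming a shared stack at org/shared-infra/prod that exports a clusterArn output (the stack path and output name are hypothetical):

import * as pulumi from "@pulumi/pulumi";
import * as awsx from "@pulumi/awsx";

// Look up shared infrastructure exported by another Pulumi stack.
// The stack path "org/shared-infra/prod" and the output name "clusterArn"
// are placeholders for whatever the shared project actually exports.
const shared = new pulumi.StackReference("org/shared-infra/prod");
const clusterArn = shared.requireOutput("clusterArn");

// Each service lives in its own project, so `pulumi up` here only deploys
// this one FargateService. Load balancer wiring is omitted to keep the
// sketch short.
const service = new awsx.ecs.FargateService("service-a", {
    cluster: clusterArn,
    assignPublicIp: true,
    desiredCount: 2,
    taskDefinitionArgs: {
        container: {
            // Same env-provided image tag as in the code above.
            image: process.env.IMAGE_TAG!,
            cpu: 512,
            memory: 128,
            essential: true,
        },
    },
});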