m
Is it safe to assume you're using the awsx docker helper method? e.g.
repository.buildAndPushImage()
?
b
@millions-furniture-75402 nope
I'm not maybe i should be
m
Not necessarily.
b
One sec I'll share my configuration
m
okay
b
it's quite small
so far
@millions-furniture-75402
import * as fs from 'fs';
import * as awsx from "@pulumi/awsx";
import * as aws from "@pulumi/aws";


const services = fs.readdirSync('./app').filter(path => /-service$/.test(path));

// Step 1: Create an ECS Fargate cluster.
const cluster = new awsx.ecs.Cluster("cluster");

// Step 2: Define the Networking for our service.
const alb = new awsx.lb.ApplicationLoadBalancer(
    "net-lb", { external: true, securityGroups: cluster.securityGroups });

const web = alb.createListener("web", { port: 80, external: true });

const getServiceSegment = (service: string) => service.replace(/-service$/, '');

services.forEach((service) => {

	const segment = getServiceSegment(service);

	// Build and publish a Docker image to a private ECR registry.
	const serviceImg = awsx.ecs.Image.fromPath(`${service}-img`, `./app/${service}`);

	// Create a Fargate service task that can scale out.
	const serviceTargetGroup = new aws.lb.TargetGroup(`${service}-target-group`, {
	    port: 80,
	    protocol: "HTTP",
	    vpcId: alb.vpc.id,
	    targetType: 'ip'
	});

	// Renamed so it doesn't shadow the forEach parameter (the original
	// `const service = ...` also referenced itself before initialization).
	const fargateService = new awsx.ecs.FargateService(service, {
	    cluster,
	    loadBalancers: [{
		containerName: `${service}-container`,
		containerPort: 80,
		targetGroupArn: serviceTargetGroup.arn

	    }],
	    taskDefinitionArgs: {
		containers: {
		    [`${service}-container`]: {
			portMappings: [{
			    containerPort: 80,

			}],
			image: serviceImg,
			cpu: 102 /*10% of 1024*/,
			memory: 50 /*MB*/,
		    }
		},
	    },
	    desiredCount: 3,
	});


	// Priority is omitted so AWS assigns the next available value;
	// a fixed priority of 100 would collide across services.
	new aws.lb.ListenerRule(`${service}-listener-rule`, {
	    listenerArn: web.listener.arn,
	    actions: [{
		type: "forward",
		targetGroupArn: serviceTargetGroup.arn
	    }],
	    conditions: [
		{
		    pathPattern: {
			values: [`/${segment}`, `/${segment}/*`],
		    },
		}
	    ],
	});

})


// Step 5: Export the Internet address for the service.
export const url = web.endpoint.hostname;
@millions-furniture-75402 does this make sense?
it's very basic configuration... I haven't done much yet
m
Yeah, that makes sense.
There are downsides to this approach compared to treating each service as a separate stack. I believe you're aware of some of the downsides already. Within your current approach, you might have some opportunity with
.fromDockerBuild
https://www.pulumi.com/docs/reference/pkg/nodejs/pulumi/awsx/ecs/#containers
If you want to explore multiple stacks, the automation API may help you manage that more easily.
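For example, a minimal sketch of feeding build options into fromDockerBuild. The helper name and paths are assumptions based on your ./app/<service> layout, not your exact code:

```typescript
// Hypothetical helper: build args for awsx.ecs.Image.fromDockerBuild
// (classic awsx), assuming each service lives under ./app/<service>.
function dockerBuildArgs(service: string) {
    return {
        context: `./app/${service}`,
        dockerfile: `./app/${service}/Dockerfile`,
        env: { DOCKER_BUILDKIT: "1" }, // opt in to BuildKit layer caching
    };
}

// Usage inside the forEach, replacing Image.fromPath:
//   const serviceImg = awsx.ecs.Image.fromDockerBuild(`${service}-img`, dockerBuildArgs(service));
```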
I'm not sure if this is still the case, but Pulumi preview required the docker image be built again so that it could diff the image hash with the previous state.
Another trick we have used was setting
DOCKER_BUILDKIT=1
, which may not be necessary with current versions of docker if it's implicit (I'm not sure) https://brianchristner.io/what-is-docker-buildkit/
const applicationImage = containerRepository.buildAndPushImage({
  env: {
    DOCKER_BUILDKIT: "1",
  },
});
b
@millions-furniture-75402 it feels like there should be a way to retrieve a resource (any resource, not just a docker image) from cache instead of rebuilding it in situations like these.
Like my setup works fine
... it's just that if I expanded that to 100 services... the pulumi up command would be slow as hell trying to create all those resources
I'll look into these links you've provided
m
Essentially you'd be rolling your own cache check and injecting
ignoreChanges
dynamically if you have a hit
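A minimal sketch of that idea, assuming you persist the image digest from the previous deploy somewhere you can read back (a stack output, SSM, etc.) — the helper name and the ignored property path are hypothetical:

```typescript
// Hypothetical cache check: on a digest match, inject ignoreChanges so
// Pulumi skips diffing the container definition (and thus the image).
function cacheAwareOpts(prevDigest: string | undefined, currentDigest: string): { ignoreChanges: string[] } {
    const hit = prevDigest !== undefined && prevDigest === currentDigest;
    return { ignoreChanges: hit ? ["taskDefinitionArgs.containers"] : [] };
}

// Usage inside the forEach, as the resource-options argument:
//   new awsx.ecs.FargateService(service, { ...serviceArgs }, cacheAwareOpts(prev, digest));
```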
b
Hmmm Interesting
Alternatively if I create "micro-stacks" for each service
and reference the main stack for the LB output
m
That's ideal in some ways. You have separate lifecycle management for each of your services as a stack
b
How can I create an new environment (e.g. stage-2, dev-3) for the main stack and all micro-stacks?
m
What happens to the state of your "mega stack" if it fails during deployment on service 3 of 5?
b
Also, how can I abstract this away so application developers don't have to see all the infrastructure code in their service directory?
m
You should namespace your stacks, e.g.
sandbox.us-east-1
,
dev.us-east-1
Create a new stack with
pulumi stack init <stackname>
b
right
the idea of just creating a new environment for all the services in my megastack with a single command is appealing
m
The automation API can help you with managing many stacks all as one program. Bear in mind that the stack YAMLs are tightly coupled with the Pulumi CLI, even with the automation API.
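A hedged sketch of what that could look like — the stack-naming scheme and workDir are assumptions, and Pulumi's automation package is loaded lazily so the naming helper stands on its own:

```typescript
// Hypothetical: derive namespaced stack names for one environment,
// e.g. "dev-3.auth", "dev-3.billing".
function stackNames(env: string, services: string[]): string[] {
    return services.map(s => `${env}.${s.replace(/-service$/, "")}`);
}

// Bring every micro-stack for an environment up with one program.
async function upEnvironment(env: string, services: string[]) {
    // Loaded lazily so stackNames stays usable without Pulumi installed.
    const { LocalWorkspace } = require("@pulumi/pulumi/automation");
    for (const stackName of stackNames(env, services)) {
        const stack = await LocalWorkspace.createOrSelectStack({
            stackName,
            workDir: ".", // assumed: the Pulumi project root
        });
        await stack.up({ onOutput: console.log });
    }
}
```

Each service stack still does its own StackReference to the shared LB stack; the automation program just sequences the ups.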
b
I have to read more about the automation API
m
Here is a conversation with some information from the last time I discussed the automation API with someone https://pulumi-community.slack.com/archives/C019YSXN04B/p1661373144801119
b
@millions-furniture-75402 it seems like I can create a stack for the LB and then create a "microstack" for each service that uses a stack reference to the LB
and then tie this together with the automation api
^^ this is the path forward I'm going to try
m
Yes, that sounds right. A popular pattern is to have a "base infrastructure" stack that contains shared resources across projects or stacks. In some cases, your project will be complex or large enough that it should have its own "base infrastructure".
Take for example, this dynamically generated gameserver stack that is using outputs from another "gameserver-shared" stack.
const gameserverShared = new pulumi.StackReference(gameserverSharedName);

const gameserverSharedAlbId = gameserverShared.getOutput("albId");
const gameserverSharedAlbSecurityGroupId = gameserverShared.getOutput("albSecurityGroupId");

const alb = new awsx.lb.ApplicationLoadBalancer(`${resourcePrefix}-lb`, {
  loadBalancer: aws.lb.LoadBalancer.get(`${resourcePrefix}-lb`, gameserverSharedAlbId, { vpcId: vpc.vpc.id }),
  external: true,
  securityGroups: [gameserverSharedAlbSecurityGroupId],
  vpc: vpc.vpc,
});
Then you can take that alb resource and add to it, e.g.:
const appTargetGroup = new awsx.lb.ApplicationTargetGroup(`${resourcePrefix}-tg`, {
  deregistrationDelay: 0,
  healthCheck: { path: "/health", port: "443", protocol: "HTTPS", matcher: "200" },
  loadBalancer: alb,
  port: 443,
  protocol: "HTTPS",
  vpc: vpc.vpc,
  tags: autotag,
});

const https = new awsx.lb.ApplicationListener(`${resourcePrefix}-https`, {
  listener: aws.lb.Listener.get(`${resourcePrefix}-https`, gameserverSharedListenerHttpsId),
  loadBalancer: alb,
  targetGroup: appTargetGroup,
  vpc: vpc.vpc,
});

new awsx.lb.ListenerRule(`${resourcePrefix}-lr`, https, {
  actions: [{ targetGroupArn: appTargetGroup.targetGroup.arn.apply(v => v), type: "forward" }],
  conditions: [{ hostHeader: { values: [fqdn] } }],
});

new aws.route53.Record(appName, {
  aliases: [{ evaluateTargetHealth: true, name: alb.loadBalancer.dnsName, zoneId: elbServiceHostedZoneId }],
  name: fqdn.apply(v => v),
  type: "A",
  zoneId: hostedZoneId,
});
note, my
vpc.vpc
is because I have a custom vpc component that contains the "regular"
vpc