# general
I want to define a Fargate `Cluster` in one Stack and reference it in a `FargateService` in another stack. How do I do this with awsx? The `FargateService` expects a whole `Cluster` argument, not just an ID or name. And I don't see how I can "get" an existing cluster with awsx. Do I have to export the whole cluster from the first stack?
You don't have to export the whole cluster, though I recall it being a little painful to figure out which resources you need. Let me see if I still have the reference project where I solved a similar problem. Our cluster was declared in CloudFormation and we could only use it as-is, but we had a service definition in a Pulumi project use it.
Getting ComponentResources from one stack to another is not easy; there is no built-in way to do it. Each time I've migrated an awsx resource to a plain aws one, this has been the (or at least a) driving factor.
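A sketch of the usual workaround: export only the cluster's identifier (not the component) and rehydrate a plain `aws.ecs.Cluster` in the consuming stack. The stack reference name and output key here are hypothetical.

```typescript
import * as aws from "@pulumi/aws";
import * as pulumi from "@pulumi/pulumi";

// Producing stack: export only the identifier, not the ComponentResource, e.g.
//   export const clusterName = cluster.cluster.name; // awsx wraps an aws.ecs.Cluster

// Consuming stack ("org/infra/dev" is an assumed stack name):
const infra = new pulumi.StackReference("org/infra/dev");
const clusterName = infra.requireOutput("clusterName");

// Rehydrate the underlying aws resource by ID; this is a lookup, not a new resource.
const cluster = aws.ecs.Cluster.get("shared-cluster", clusterName);
```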
Ah, I just found something mentioned in a GitHub issue: https://www.pulumi.com/docs/guides/crosswalk/aws/ecs/#using-an-existing-ecs-cluster
I will try this. No idea why I didn't find this in the docs myself.
```typescript
const cluster = new awsx.ecs.Cluster(`${appName}-cluster`, {
  cluster: aws.ecs.Cluster.get(`${appName}-local`, "sandbox-cluster"),
  vpc: vpc,
});
```
And my service looked like:
```typescript
const appService = new awsx.ecs.FargateService(`${appName}-service`, {
  assignPublicIp: false,
  deploymentMinimumHealthyPercent: 0,
  deploymentMaximumPercent: 100,
  desiredCount: 1,
  securityGroups: [appSecurityGroup],
  subnets: vpc.privateSubnetIds,
  taskDefinitionArgs: {
    containers: {
      grafana: {
        image: awsx.ecs.Image.fromDockerBuild(`${appName}-image`, {
          context: "./src",
          dockerfile: "./src/grafana.Dockerfile",
        }),
        logConfiguration: {
          logDriver: "awsfirelens",
          options: {
            "Name": "tail",
            "region": "us-east-1",
            "auto_create_stream": "true",
          },
        },
        portMappings: [appTargetGroup],
      },
    },
    taskRole: applicationRole,
  },
});
```
Thanks, I will try it 🙂
@millions-furniture-75402 How do you import the VPC from another stack? I can't get this to work, every VPC I import is missing subnets and other info from the original VPC.
You can't import awsx VPCs (the ones with subnets). You need to handle it manually, using separate aws VPCs and subnets :(
There is no way to reuse a VPC resource across stacks?
I found `Vpc.fromExistingIds()`, which supposedly does what I want, but I cannot pass my `Output<string>` input to it because it's not a string. I am very confused. I guess if I could get an `aws.ec2.Vpc` I could make an `awsx.ec2.Vpc` with it, right? But how...
Yes, you can import an aws VPC with a VPC id (it is a string Output; you use that), and you can build an unmanaged awsx VPC around it.
So you can achieve it. I found it easier to refactor my projects so that everything that needs my awsx VPC is in the one project, but that's not a requirement.
The awsx `Vpc` class provides a `fromExistingIds()` method for the second task.
The first task is even easier: just pass the `Output<string>` into the `Vpc.get()` method. Don't use `getVpc()`; that does something different.
getVpc() is a wrapper around the AWS SDK EC2 client's getVpc() function, which returns a GetVpcResult. It's not often used in pure Pulumi code; it's for those outlying cases where you need extra values from the SDK that Pulumi doesn't expose.
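Putting that advice together, a minimal sketch (the stack reference name and output key are assumptions): `aws.ec2.Vpc.get()` accepts an `Input<string>`, so the stack output can be passed straight in.

```typescript
import * as aws from "@pulumi/aws";
import * as pulumi from "@pulumi/pulumi";

// "org/network/staging" is a hypothetical stack name.
const infra = new pulumi.StackReference("org/network/staging");
const vpcId = infra.requireOutput("vpcId") as pulumi.Output<string>;

// Vpc.get takes Input<string>, so the Output can be passed directly;
// no .apply() or await is needed.
const vpc = aws.ec2.Vpc.get("imported-vpc", vpcId);
export const cidr = vpc.cidrBlock;
```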
@little-cartoon-10569 I tried using `Vpc.fromExistingIds()` but found that it doesn't really import any of the VPC data like subnets:
```typescript
const vpcId = infra.requireOutputValue('vpcId');
const vpc = awsx.ec2.Vpc.fromExistingIds('staging', { vpcId });
export const vpcSubnets = vpc.publicSubnetIds;
```
vpcSubnets will be empty again
Same problem with `Vpc.get()`. It's as if none of them do any actual importing; they all just create an empty object with the vpcId.
Yes, that's right. The subnets are not loaded, just the VPC is.
The subnets aren't ever really loaded. Just promises to them. For your requirements, you will probably need to export the important subnet IDs from one project, and import them into another.
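A sketch of that export/import approach (stack name and output keys are assumptions, not from the original thread):

```typescript
import * as pulumi from "@pulumi/pulumi";

// Producing project: export the subnet IDs alongside the VPC ID, e.g.
//   export const vpcId = vpc.id;
//   export const privateSubnetIds = vpc.privateSubnetIds;
//   export const publicSubnetIds = vpc.publicSubnetIds;

// Consuming project: read them back through a StackReference.
const infra = new pulumi.StackReference("org/network/staging"); // hypothetical
const privateSubnetIds =
    infra.requireOutput("privateSubnetIds") as pulumi.Output<string[]>;

// privateSubnetIds can now be fed to a FargateService's `subnets` argument.
```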
Hm that's unfortunate. I really expected I could get the same VPC object with all the stuff attached by using StackReference. I will try carrying the subnets over manually then and hope that's enough.
The subnets created by an awsx VPC aren't quite first-class citizens even in the same stack. They're wrapped in promises. The code that creates them in the awsx package runs from within a promise callback, rather than within the "engine" like all the other resources.
If you need to use subnets frequently, you may find that creating them yourself, and not using the awsx package, gives you more power and flexibility.
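A sketch of that hand-rolled alternative, using only the aws package (CIDRs and the availability zone are illustrative assumptions): every subnet ID here is a plain `Output<string>` you can export and consume in any stack.

```typescript
import * as aws from "@pulumi/aws";

// Hand-rolled equivalent of a small awsx VPC: one VPC, one public subnet.
const vpc = new aws.ec2.Vpc("main", {
    cidrBlock: "10.0.0.0/16",
    enableDnsHostnames: true,
});

const publicSubnet = new aws.ec2.Subnet("public-a", {
    vpcId: vpc.id,
    cidrBlock: "10.0.1.0/24",
    availabilityZone: "us-east-1a", // assumption; pick your region's AZ
    mapPublicIpOnLaunch: true,
});

const igw = new aws.ec2.InternetGateway("igw", { vpcId: vpc.id });

const routeTable = new aws.ec2.RouteTable("public-rt", {
    vpcId: vpc.id,
    routes: [{ cidrBlock: "0.0.0.0/0", gatewayId: igw.id }],
});

new aws.ec2.RouteTableAssociation("public-a-rt", {
    subnetId: publicSubnet.id,
    routeTableId: routeTable.id,
});

// Plain Outputs, no promise wrapping: safe to export across stacks.
export const publicSubnetId = publicSubnet.id;
```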
Here is an example using the awsx package to build the VPC in TypeScript, and then using that same VPC with ECS built in Python.
Yeah, echoing everything said here: I put my private and public subnets in configuration, then use fromExisting.
That said, I do it in our "base infrastructure" stack and export the VPC ID and subnets as outputs, so the "child" stacks can recreate the VPC more easily without their own configuration.