# getting-started
m
Hi all, I'm attempting to reference a target group from Stack A in an ALB in Stack B. The creation of Stack A fails with this error:
Copy code
creating ECS Service (pmb-main-fargate-main): InvalidParameterException: The target group with targetGroupArn arn:aws:elasticloadbalancing:us-west-2:851404744550:targetgroup/pmb-main-tg-main-a72055f/602533dbf27bf9ae does not have an associated load balancer.
Is this a Fargate limitation that is impossible to work around, or is there some option I'm not finding in the docs? The workaround appears to be creating the TG first, then linking it to an ALB in a different stack, then creating the Fargate service linking to the TG, but... that seems extremely janky
d
I think this is an aws limitation. Is it possible for you to setup the Listener in Stack A instead of stack B? Or perhaps have Stack B export the target group arn for use in Stack A by the service?
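For reference, the cross-stack wiring d describes is usually done with a StackReference; a minimal sketch, where `org/core-infra/dev` and the `targetGroupArn` output name are hypothetical:

```typescript
import * as pulumi from "@pulumi/pulumi";

// In the ALB stack: export the target group's ARN as a stack output.
// export const targetGroupArn = targetGroup.arn;

// In the service stack: read it back via a StackReference.
// "org/core-infra/dev" is a hypothetical fully-qualified stack name.
const coreInfra = new pulumi.StackReference("org/core-infra/dev");
const targetGroupArn = coreInfra.getOutput("targetGroupArn");
```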
m
hmm, yeah its a pickle. creating the listener requires that I have the ALB ARN right?
d
Correct
m
that ALB is being created in a downstream stack and I'm trying to avoid circular dependencies
d
It's common to have the networking layer (ie VPC, LBs) setup before services. I can't think of a workaround. I'm sure AWS has reasons for this extra validation; perhaps to make sure TG healthchecks have a machine to run from.
m
That seems a little backwards... so I'd be importing the ALB into all my other stacks - not the other way around?
or i guess the ALB arn i need to reference with the listeners
d
Yes. You could also have the ALB stack setup listeners + target groups, and export the TGs
m
gotcha; okay let me give that a shot; it does make sense conceptually
quick followup
lets say I have ALB with a listener on port 80; it looks like i need to keep that listener in the same stack as the ALB and export it; what defaults do I set for it so I can add rules to it later on?
it seems to require a defaultAction which is the fallthrough
d
You could have it forward to the TG. Or if you know it should never actually happen, and only the subsequent rules will handle traffic, give a fixed response of 404
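A sketch of that fixed-response default action, assuming an `alb` resource already exists in the stack (resource names are illustrative):

```typescript
import * as aws from "@pulumi/aws";

declare const alb: aws.lb.LoadBalancer; // the ALB created earlier in this stack

// Listener whose default action is a fixed 404; real traffic is matched
// by ListenerRules attached later from other stacks.
const httpListener = new aws.lb.Listener("port80-listener", {
    loadBalancerArn: alb.arn,
    port: 80,
    protocol: "HTTP",
    defaultActions: [{
        type: "fixed-response",
        fixedResponse: {
            contentType: "text/plain",
            statusCode: "404",
            messageBody: "Not Found",
        },
    }],
});

// Export the ARN so downstream stacks can attach rules to it.
export const port80ListenerArn = httpListener.arn;
```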
m
oh yeah! okay
d
Though with port 80, you'd normally do a redirect to https port 443 ;)
m
normally 🙂
d
m
I'm trying to get our dev/test/stg environments going
tyvm!
here's a silly question...
should an external-facing ALB for web traffic be associated to public or private subnets ☠️
d
Public subnet if you want it reachable from the Internet
The fargate service itself can be attached to the private subnets though
m
im struggling here 😂 so I've got
Copy code
const vpc = awsx.classic.ec2.Vpc.getDefault();
and I can't seem to get the public subnets out of it to place into:
Copy code
const alb = new awsx.lb.ApplicationLoadBalancer(pre('main-web-alb'), {
    name: pre('web-traffic-alb'),
    idleTimeout: 360,
    securityGroups: [securityGroup.id],
    listener: {
        port: 873
    },
    subnets: vpc.publicSubnetIds
});
been digging around 15-30min now, types dont appear compatible
cant seem to get the following type out of awsx.classic.ec2.Vpc:
Copy code
Input<Input<Subnet>[]>
😂
d
I think the awsx package takes care of subnets for you, and will default to using the default vpc public subnets
So no need to specify them
m
hm. having a reachability issue and out of ideas besides that
this rule (in the stack where my fargate service is) is reachable from web, added it for debugging.
Copy code
const envCheckRule = new aws.lb.ListenerRule(pre('envCheckListener'), {
    conditions: [{
        pathPattern: {
            values: ["/environment-check*"],
        },
    }],
    actions: [{
        type: "fixed-response",
        fixedResponse: {
            contentType: "text/plain",
            statusCode: "200",
            messageBody: pulumi.interpolate`Check OK - ENV ${getStack()}`,
        },
    }],
    listenerArn: coreInfraStackRef.getOutput('port80ListenerArn'),
    priority: 50,  // We'll have the onboarding listener be a higher priority.
});
but for some reason this rule:
Copy code
const serviceListenerRule = new aws.lb.ListenerRule(pre('mainServiceListener'), {
    conditions: [{
        pathPattern: {
            values: ["/*"],
        },
    }],
    actions: [{
        type: "forward",
        targetGroupArn: mainTargetGroup.arn,
    }],
    listenerArn: coreInfraStackRef.getOutput('port80ListenerArn'),
    priority: 500,  // We'll have the onboarding listener be a higher priority.
});
does not forward to my fargate service despite this config:
Copy code
const pmbMainFargateService = new awsx.ecs.FargateService(pre('fargate-main'), {
    cluster: pmbServiceCluster.arn,
    name: pre('fargate-main'),
    assignPublicIp: true,
    healthCheckGracePeriodSeconds: 420,
    desiredCount: 1,
    taskDefinitionArgs: {
        logGroup: {
            existing: {
                name: pmbServiceMainLogGroup.name
            }
        },
        containers: {
            pmbWebApp: {
                name: "pmbWebApp",
                image: pmbMainDockerImage.repoDigest,
                cpu: 1024, //1024 = 1 VCPU
                memory: 2048, // MiB
                essential: true,
                healthCheck: {
                    interval: 25,
                    retries: 10,
                    command: ["CMD-SHELL", `ps x | grep puma | grep 0.0.0.0:${pmbMainAppPort}`]
                },
                portMappings: [{ 
                    targetGroup: mainTargetGroup,
                    containerPort: pmbMainAppPort,
                    hostPort: pmbMainAppPort,
                }],
TG is
Copy code
const mainTargetGroup = new aws.lb.TargetGroup(pre('tg-main'), {
    port: pmbMainAppPort,
    protocol: "HTTP",
    targetType: "ip",
    vpcId: vpc.id,
    healthCheck: {
        interval: 30,
        timeout: 25,
        matcher: "200,304",
        path: "/favicon.ico",
        unhealthyThreshold: 10
    },
});
i have a feeling i messed up the ports somewhere but
been staring at this for too long ☠️
oh god 🤦‍♂️ i didn't specify the port on the target group
d
Haha, yeah, that'd do it.
m
ok, hopefully last silly question
i got a good ole s3 bucket configured as static web. bucket.webSiteEndpoint = visible from web. this rule?
Copy code
const listenerRule = new aws.lb.ListenerRule(pre('listener-s3'), {
    listenerArn: coreInfraStackRef.getOutput('port80ListenerArn'),  // Replace with your ALB listener ARN
    priority: 75,
    actions: [{
        type: "redirect",
        redirect: {
            protocol: "HTTP",  // or HTTPS if your bucket is set up with a custom domain and SSL
            port: "80",  // or 443 for HTTPS
            host: bucket.websiteEndpoint,
            path: "/#{path}",
            query: "#{query}",
            statusCode: "HTTP_301",
        },
    }],
    conditions: [{
		pathPattern: {
			values: ["/onboarding*"],
		},
    }]
});
403
d
Is the 403 from your application, or are you seeing the redirect first?
m
ah it was a bad bucket policy
weird
went from 403 to 404 lmao
Copy code
404 Not Found
Code: NoSuchKey
Message: The specified key does not exist.
NoSuchKey
d
It can sometimes take some time to propagate rules. NoSuchKey sounds like it's an s3 error. There'll be headers that give it away. It's worth looking at request/responses with curl in verbose mode (-v)
m
hm
not much clue there; gives me host ip and a few other things but nothing telling
d
Is there a redirect shown before the 404?
m
nope, just two 404s from different IPs
one is just not found, one is not found from disk cache
oh, hm
main ALB seems to be requesting /onboarding when it should be requesting /
from the s3 bucket
oh, there we go
now the redirect works
is there any way to fetch it through the ALB under the originally requested URL?
or do i need a CDN distro for that
also i cant tell you how much i appreciate the input lol
d
Without redirects, you'll need to run a server to proxy the bucket. I used to use nginx for it in the past. I've had a good experience with Caddy, which has a simple reverse-proxy command. But the functionality isn't built into ALB
main ALB seems to be requesting /onboarding when it should be requesting /
ALB also doesn't support rewriting the path like this. You can add to it, but not remove parts. So a server is also required here, or upload the files under the /onboarding/ prefix
Or yes, setup Cloudfront to distribute between the bucket and your ALB.
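A rough sketch of that CloudFront route, assuming existing `alb` and `bucket` resources; this is untested and the names are illustrative:

```typescript
import * as aws from "@pulumi/aws";

declare const alb: aws.lb.LoadBalancer; // the existing ALB
declare const bucket: aws.s3.Bucket;    // the static-site bucket

// One distribution, two origins: the bucket website serves /onboarding*,
// the ALB serves everything else.
const cdn = new aws.cloudfront.Distribution("web-cdn", {
    enabled: true,
    origins: [
        {
            originId: "alb",
            domainName: alb.dnsName,
            customOriginConfig: {
                httpPort: 80,
                httpsPort: 443,
                originProtocolPolicy: "http-only",
                originSslProtocols: ["TLSv1.2"],
            },
        },
        {
            originId: "s3-website",
            domainName: bucket.websiteEndpoint,
            customOriginConfig: {
                httpPort: 80,
                httpsPort: 443,
                originProtocolPolicy: "http-only", // S3 website endpoints are HTTP-only
                originSslProtocols: ["TLSv1.2"],
            },
        },
    ],
    defaultCacheBehavior: {
        targetOriginId: "alb",
        viewerProtocolPolicy: "redirect-to-https",
        allowedMethods: ["GET", "HEAD", "OPTIONS", "PUT", "POST", "PATCH", "DELETE"],
        cachedMethods: ["GET", "HEAD"],
        forwardedValues: { queryString: true, cookies: { forward: "all" } },
    },
    orderedCacheBehaviors: [{
        pathPattern: "/onboarding*",
        targetOriginId: "s3-website",
        viewerProtocolPolicy: "redirect-to-https",
        allowedMethods: ["GET", "HEAD"],
        cachedMethods: ["GET", "HEAD"],
        forwardedValues: { queryString: false, cookies: { forward: "none" } },
    }],
    restrictions: { geoRestriction: { restrictionType: "none" } },
    viewerCertificate: { cloudfrontDefaultCertificate: true },
});
```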
m
okie ima try the CDN route
weird; redirects work but i cant get the ALB to forward to anything
fargate is still not web reachable ☠️
d
Worth checking the health status on the target group, any logs in cloudwatch, and status checks in the ecs console. It's worth checking the security group rules attached to the ecs service as well
m
dug through the healthchecks and all logs. servers up, health checks green; what kind of SG rule would allow me to directly access the fargate service from a URL but not from an ALB?
or is assignPublicIp on my fargate the whole problem
d
Hmm. I've never tried it, but you'd just send the http request to the assigned ip address
Check the SG attached to the ecs service from the console, do the Ingress rules look correct on it?
m
i dont have an SG attached to the service. i think i found the problem lol
it was previously just set public via the assignPublicIp
d
Public IP means you just get an Internet facing IP address. Still requires the SG to apply firewall rules
m
security group just needs to allow the ALB itself right
d
Yep
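A sketch of such a rule, assuming the ALB's security group is at hand (all names here are illustrative):

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

declare const albSecurityGroup: aws.ec2.SecurityGroup; // SG attached to the ALB
declare const vpcId: pulumi.Input<string>;
const appPort = 3000; // whatever the container listens on

// Only the ALB may reach the app port; everything outbound is allowed.
const serviceSg = new aws.ec2.SecurityGroup("service-sg", {
    vpcId: vpcId,
    ingress: [{
        protocol: "tcp",
        fromPort: appPort,
        toPort: appPort,
        securityGroups: [albSecurityGroup.id], // source = the ALB's SG
    }],
    egress: [{
        protocol: "-1",
        fromPort: 0,
        toPort: 0,
        cidrBlocks: ["0.0.0.0/0"],
    }],
});
```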
m
weird
https://docs.aws.amazon.com/cli/latest/reference/ecs/create-service.html
Copy code
The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used.
apparently ECS supports this but having trouble finding it in pulumi
welp, found it in network config
oh; it seems that serving s3 with a lambda function is one of the more painless approaches
dang man; I can't get this ALB to reach anything lmfao
the only thing it returns is fixed-response
d
What do the Listener rules look like in the console?
m
port80 listener
d
And you're getting the 404 fixed response?
m
i set my fixed responses to be 200 with a message and getting both of them
each rule is in a different stack
so they're all being attached properly. just... cant figure out what im doing wrong
d
What does the curl command look like (excluding domain)?
m
curl directly to the ALB?
d
However you're testing it to get the fixed response
m
image.png
d
That looks correct
503 suggests the backend is down. Are the health checks passing on the target group?
m
they were before - not at the moment.
working on like 5 repos at once to not just be waiting for 3-6 min pulumi deploys - the main web app is down rn
i tried to start my fargate services with networkConfiguration but it wouldnt work even if i blew open the security groups for all ports / all cidr egress and ingress
but it works with assignPublicIp: true for some reason. one of the many mysteries right now 😓
d
There's additional setup to remove public ips, like having a NAT, and setting up private connect endpoints (or something, been a while) to let ECR be reachable
For simplicity, you can leave it public. The security group will keep it locked down
m
gotcha. the way ecs works is if you enable assignPublicIp, you cant assign security groups lol
d
.... Really?
m
...it does sound unlikely, right? but:
Copy code
assignPublicIp: true,
	// networkConfiguration: {
	// 	securityGroups: [sideKiqEcsSecurityGroup.id],
	// 	subnets: coreInfraStackRef.getOutput('privateSubnetIds'),
	// },
I literally get an ECS error saying i can use one or the other. its very clear
using awsx.ecs.FargateService
ok, now that webapp is up
target groups are healthy and ALB root went from 503 unavailable to 502 bad gateway
d
Ah, so there's a top-level assignpublicIPs, and one nested under networkConfiguration. You need to use the nested one
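For reference, a sketch of the nested form (cluster, subnets, and SG are assumed to exist elsewhere in the stack):

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";
import * as awsx from "@pulumi/awsx";

declare const clusterArn: pulumi.Input<string>;
declare const publicSubnetIds: pulumi.Input<pulumi.Input<string>[]>;
declare const serviceSg: aws.ec2.SecurityGroup;

const service = new awsx.ecs.FargateService("fargate-main", {
    cluster: clusterArn,
    networkConfiguration: {
        assignPublicIp: true,           // the nested flag, not the top-level one
        subnets: publicSubnetIds,       // public IPs require public subnets
        securityGroups: [serviceSg.id],
    },
    taskDefinitionArgs: {
        container: {
            name: "app",
            image: "nginx:alpine", // placeholder image
            cpu: 256,
            memory: 512,
        },
    },
});
```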
m
....
😂
redeploying
d
Remember to use the public subnet for public ips :)
m
i actually... dont even know how to get those. silly enough
Copy code
vpc.getSubnetsIds('public')
and everything else comes back blank
in the console i obviously have 1 public subnet per az
d
How are you exporting the private ids currently?
m
Copy code
const privateSubnets = [ // Do NOT touch CIDR block configuration.
    createPrivateSubnet("private-subnet-1", vpc, "64", "a"),
    createPrivateSubnet("private-subnet-2", vpc, "80", "b"),
    createPrivateSubnet("private-subnet-3", vpc, "96", "c"),
    createPrivateSubnet("private-subnet-4", vpc, "112", "d")
]
X____X
Copy code
vpc.getSubnetsIds('public');
says it only gets pub subnets from when vpc was created
and in my case it returns blank 😂
m
awsx.classic.ec2.Vpc.getDefault();
i havent been able to get the default vpc with the other types
d
Tbh it's been a while since using aws. I remember using tags so I could look them up quickly, but that was to work around TF limitations
m
the
Copy code
new awsx.ec2.DefaultVpc()
requires a name and my default vpc doesnt have one in any of my envs
d
The name is just the pulumi resource name, for state tracking
It's not related to the names in aws
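So, for instance:

```typescript
import * as awsx from "@pulumi/awsx";

// "default-vpc" is only the Pulumi resource name used for state tracking;
// this looks up the account's default VPC, which needs no name in AWS.
const vpc = new awsx.ec2.DefaultVpc("default-vpc");

export const publicSubnetIds = vpc.publicSubnetIds;
export const privateSubnetIds = vpc.privateSubnetIds;
```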
m
OH
🤦‍♂️
d
Don't worry, it confuses many people while they get to grips with Pulumi
m
omfg
that just made my life so much easier
oh hey
ALB now forwarding as it should
d
It's also used to help name resources in aws if you don't have name: ... set in the Args object. It'll generate the name for you, with a random suffix. Helps with resource recreation
Sweet, glad it's working
m
my only other challenge now is to serve s3 with lambda under separate url 😂
can use lambda to do the URL rewrite too
d
You can code anything into lambda. I'd probably use nginx/caddy within ecs though, as then you're using a purpose built tool
m
hm... might be a better idea lol
lambda does not enjoy deploying apparently
Copy code
pulumi:pulumi:Stack (domain-user-development):
    panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xe05f6e]
d
Can you share the code? This should never happen
m
Copy code
const s3ReadRole = new aws.iam.Role("s3ReadRole", {
    assumeRolePolicy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
            Action: "sts:AssumeRole",
            Principal: {
                Service: "lambda.amazonaws.com"
            },
            Effect: "Allow",
            Sid: ""
        }]
    })
});

const lambdaLogGroup = new aws.cloudwatch.LogGroup(pre('lambda-log'), {
	retentionInDays: 60
})

const lambdaLoggingPolicyDocument = aws.iam.getPolicyDocument({
    statements: [{
        effect: "Allow",
        actions: [
            "logs:CreateLogGroup",
            "logs:CreateLogStream",
            "logs:PutLogEvents",
        ],
        resources: ["arn:aws:logs:*:*:*"],
    }],
});
const lambdaLoggingPolicy = new aws.iam.Policy(pre('lambda-log-policy'), {
    path: "/",
    description: "IAM policy for logging from a lambda",
    policy: lambdaLoggingPolicyDocument.then(lambdaLoggingPolicyDocument => lambdaLoggingPolicyDocument.json),
});

const lambdaExecutionRole = new aws.iam.Role("lambdaExecutionRole", {
    assumeRolePolicy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
            Action: "sts:AssumeRole",
            Principal: {
                Service: "lambda.amazonaws.com"
            },
            Effect: "Allow",
            Sid: ""
        }]
    })
});

const lambdaLogs = new aws.iam.RolePolicyAttachment(pre('lambda-log-attach'), {
    role: lambdaExecutionRole.name,
    policyArn: lambdaLoggingPolicy.arn,
});

const lambdaSecurityGroup = new aws.ec2.SecurityGroup(pre('lambda-sg'), {
	vpcId: vpc.vpcId,
    description: "Security group for domain-user proxy lambda",
    egress: [
        {
            protocol: "-1",  // Allow all outbound traffic
            fromPort: 0,
            toPort: 0,
            cidrBlocks: ["0.0.0.0/0"],
        }
    ],
});

const lambdaTargetGroup = new aws.lb.TargetGroup(pre('tg-lambda'), {
    protocol: "HTTP",
    targetType: "lambda",
	port: 80,
    vpcId: vpc.vpcId
});

const proxyLambda = new aws.lambda.Function(pre('s3-lambda'), {
    runtime: aws.lambda.Runtime.NodeJS18dX,
    handler: "index.handler",
		vpcConfig: {
			subnetIds: vpc.privateSubnetIds,
			securityGroupIds: [lambdaSecurityGroup.id]
		},
    code: new pulumi.asset.AssetArchive({
        "index.js": new pulumi.asset.StringAsset(`
            const AWS = require('aws-sdk');
            const s3 = new AWS.S3();

            exports.handler = async (event) => {
				console.log('Starting Execution');
                const path = event.path.substring(1); // Remove the leading '/'

                try {
                    const s3Response = await s3.getObject({
                        Bucket: "${bucket.bucket}",
                        Key: path,
                    }).promise();

                    return {
                        statusCode: 200,
                        body: s3Response.Body.toString(),
                        headers: { "Content-Type": "text/plain" }, // Adjust based on your content
                    };
                } catch (error) {
                    return {
                        statusCode: 500,
                        body: "Internal Server Error",
                    };
                }
            };
        `),
    }),
    role: s3ReadRole.arn,
}, {
	dependsOn: [lambdaLogs, lambdaLogGroup, lambdaTargetGroup, lambdaLoggingPolicy, lambdaExecutionRole, lambdaSecurityGroup, s3ReadRole]
});

const lambdaElbPermission = new aws.lambda.Permission(pre('elb-permission'), {
    action: "lambda:InvokeFunction",
    function: proxyLambda.name,
    principal: "elasticloadbalancing.amazonaws.com",
    // If you're using an Application Load Balancer, you might also need to specify the source ARN:
    sourceArn: lambdaTargetGroup.arn
});
const lambdaS3Permission = new aws.lambda.Permission(pre('s3-permission'), {
    action: "lambda:InvokeFunction",
    function: proxyLambda.name,
    principal: "s3.amazonaws.com",
    // If you're using an Application Load Balancer, you might also need to specify the source ARN:
    sourceArn: lambdaTargetGroup.arn
});

const lambdaTarget = new aws.lb.TargetGroupAttachment("lambdaTarget", {
    targetGroupArn: lambdaTargetGroup.arn,
    targetId: proxyLambda.arn,
}, { dependsOn: [lambdaTargetGroup, lambdaElbPermission, lambdaS3Permission]});

const listenerRule = new aws.lb.ListenerRule(pre('listener-s3'), {
    listenerArn: coreInfraStackRef.getOutput('port80ListenerArn'),
    priority: 75,
    actions: [{
        type: "forward",
		targetGroupArn: lambdaTargetGroup.arn
    }],
    conditions: [{
		pathPattern: {
			values: ["/onboarding*"],
		},
    }]
});
tried with/without dependsOn and a bunch of other stuff
no matter what i change in the file for some reason it goes for creating the lambda first
d
There's a dependency loop between the TG and the function
m
huh. commenting out the vpcConfig resolved it
how would i work around the loop, if possible?
odd
d
Sorry, I misread it. There isn't.
m
if i comment out the vpcConfig it just says max number of target groups reached
if the vpcConfig is uncommented it is a consistent kernel panic in go
ok thats strange...
if i look in console while its updating, the target group lists a lambda function as its target which doesnt exist.
yeah the TG is pointing to a function that no longer exists while a different one is there
i have no idea what to do 😞 nginx time haha
d
does it work with vpcId: vpc.vpcId added?
m
testing now
Copy code
aws:lambda:Function (domain-user-s3-lambda):
    error: aws:lambda/function:Function resource 'domain-user-s3-lambda' has a problem: Value for unconfigurable attribute. Can't configure a value for "vpc_config.0.vpc_id": its value will be decided automatically based on the result of applying this configuration.. Examine values at 'domain-user-s3-lambda.vpcConfig.vpcId'.
d
ah ok, so it'll derive it based on the subnets
m
hm
what is the purpose of us being able to supply that?
huh
this works
Copy code
const lambdaSecurityGroup = new aws.ec2.SecurityGroup(pre('lambda-sg'), {
	vpcId: vpc.vpcId,
    description: "Security group for domain-user proxy lambda",
    egress: [
        {
            protocol: "-1",  // Allow all outbound traffic
            fromPort: 0,
            toPort: 0,
            cidrBlocks: ["0.0.0.0/0"],
        }
    ],
});

const lambdaTargetGroup = new aws.lb.TargetGroup(pre('tg-lambda'), {
    protocol: "HTTP",
    targetType: "lambda",
	port: 80,
    vpcId: vpc.vpcId
});

const proxyLambda = new aws.lambda.Function(pre('s3-lambda'), {
    runtime: aws.lambda.Runtime.NodeJS18dX,
    handler: "index.handler",
	// vpcConfig: {
	// 	subnetIds: vpc.privateSubnetIds,
	// 	securityGroupIds: [lambdaSecurityGroup.id],
	// },
    code: new pulumi.asset.AssetArchive({
        "index.js": new pulumi.asset.StringAsset(`
            const AWS = require('aws-sdk');
            const s3 = new AWS.S3();

            exports.handler = async (event) => {
				console.log('Starting Execution');
                const path = event.path.substring(1); // Remove the leading '/'

                try {
                    const s3Response = await s3.getObject({
                        Bucket: "${bucket.bucket}",
                        Key: path,
                    }).promise();

                    return {
                        statusCode: 200,
                        body: s3Response.Body.toString(),
                        headers: { "Content-Type": "text/plain" }, // Adjust based on your content
                    };
                } catch (error) {
                    return {
                        statusCode: 500,
                        body: "Internal Server Error",
                    };
                }
            };
        `),
    }),
    role: s3ReadRole.arn,
}, {dependsOn: [lambdaSecurityGroup, lambdaTargetGroup]});
even if it depends on those two, as long as the vpcConfig is commented out, it works
d
Let's check the subnet IDs are populated:
Copy code
subnetIds: vpc.privateSubnetIds.apply(ids => {
            console.log(ids);
            return ids;
        })
m
checking
yup, populated - 4 of em - and all valid.
Copy code
+ subnetIds     : [
  +     [0]: "subnet-00ff847521ecd9d36"
  +     [1]: "subnet-04b5574c704aadda3"
  +     [2]: "subnet-08bab0e8ac3185463"
  +     [3]: "subnet-0a00949490c736325"
    ]
d
I'm not sure then. It could be that the execution role is lacking permissions to configure the ENI. However, proxying requests from the bucket with lambda is likely more involved, such as handling the content-type header correctly
m
that is what i saw from console
i tried to add the vpc config via console and it said role lacks permissions
trying to triage that now
d
so I think that's it; however in my case it just hangs, no segfault
m
yup ive added this to the execution role:
Copy code
const networkInterfacePolicyDocument = aws.iam.getPolicyDocument({
    statements: [{
        effect: "Allow",
        actions: [
            "ec2:CreateNetworkInterface",
            "ec2:DescribeNetworkInterfaces",
			"ec2:DeleteNetworkInterface",
        ],
        resources: ["*"],
    }],
});
its 180sec into creating so far, but in AWS the resource exists with the proper vpc config ...but it still says this:
im not sure if thats because pulumi hung on some part of the resource spec or if its AWS
created successfully.... took 200sec
d
yep, can take a while to create
m
huh
some weirdness going on
d
this should be reported, that when using vpcConfig without the required permissions on the role that creation just hangs
m
yeah ill file it
okay, lambda now deployed, vpc linked, still getting 503 on the ALB
last thing to figure out 😄
d
simplified replication:
Copy code
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";
import * as awsx from "@pulumi/awsx";

const vpc = new awsx.ec2.DefaultVpc("vpc");

const lambdaSecurityGroup = new aws.ec2.SecurityGroup('func', {
    vpcId: vpc.vpcId,
    description: "Security group for domain-user proxy lambda",
    egress: [
        {
            protocol: "-1",  // Allow all outbound traffic
            fromPort: 0,
            toPort: 0,
            cidrBlocks: ["0.0.0.0/0"],
        }
    ],
});


const role = new aws.iam.Role('func', {
    assumeRolePolicy: aws.iam.assumeRolePolicyForPrincipal({ Service: "lambda.amazonaws.com" }),
    managedPolicyArns: [
        aws.iam.ManagedPolicy.AWSLambdaBasicExecutionRole,
        // NOTE: With this commented out, creation of the lambda function hangs indefinitely
        // aws.iam.ManagedPolicy.AWSLambdaVPCAccessExecutionRole,
    ]
});

new aws.lambda.Function('func', {
    runtime: aws.lambda.Runtime.NodeJS18dX,
    handler: "index.handler",
    vpcConfig: {
        subnetIds: vpc.publicSubnetIds,
        securityGroupIds: [lambdaSecurityGroup.id],
    },
    code: new pulumi.asset.AssetArchive({
        "index.js": new pulumi.asset.StringAsset(`
            exports.handler = async (event) => {
                console.log('Starting Execution');
                return {
                    statusCode: 200,
                    body: "Hello, world!",
                    headers: { "Content-Type": "text/plain" },
                };
            };
        `),
    }),
    role: role.arn,
});
m
tyvm!
d
503 on what? the lambda?
m
from ALB yeah
Copy code
const listenerRule = new aws.lb.ListenerRule(pre('listener-s3'), {
    listenerArn: coreInfraStackRef.getOutput('port80ListenerArn'),
    priority: 75,
    actions: [{
        type: "forward",
		targetGroupArn: lambdaTargetGroup.arn
    }],
    conditions: [{
		pathPattern: {
			values: ["/onboarding*"],
		},
    }]
});
Copy code
const lambdaTargetGroup = new aws.lb.TargetGroup(pre('tg-lambda'), {
    protocol: "HTTP",
    targetType: "lambda",
	port: 80,
    vpcId: vpc.vpcId
});
const lambdaElbPermission = new aws.lambda.Permission(pre('elb-permission'), {
    action: "lambda:InvokeFunction",
    function: proxyLambda.name,
    principal: "elasticloadbalancing.amazonaws.com",
    // If you're using an Application Load Balancer, you might also need to specify the source ARN:
    sourceArn: lambdaTargetGroup.arn
});
d
You have 2 roles within your original lambda code, and attach s3ReadRole to the lambda. You need to end up with one role that has multiple policies instead
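A sketch of that single-role shape; the managed policies listed are illustrative of what this function needs (logs, ENI management for vpcConfig, S3 reads):

```typescript
import * as aws from "@pulumi/aws";

// One execution role carrying every permission the function needs.
const lambdaRole = new aws.iam.Role("lambda-role", {
    assumeRolePolicy: aws.iam.assumeRolePolicyForPrincipal({
        Service: "lambda.amazonaws.com",
    }),
    managedPolicyArns: [
        aws.iam.ManagedPolicy.AWSLambdaBasicExecutionRole,     // CloudWatch logs
        aws.iam.ManagedPolicy.AWSLambdaVPCAccessExecutionRole, // ENIs for vpcConfig
        aws.iam.ManagedPolicy.AmazonS3ReadOnlyAccess,          // read the bucket
    ],
});
```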
m
yep i fixed that
d
👍
m
heh
Copy code
const listenerRule = new aws.lb.ListenerRule(pre('listener-s3'), {
    listenerArn: coreInfraStackRef.getOutput('port80ListenerArn'),
    priority: 75,
    actions: [{
        type: "forward",
		targetGroupArn: lambdaTargetGroup.arn
    }],
    conditions: [{
		pathPattern: {
			values: ["/onboarding*"],
		},
    }]
});

const envCheckRule = new aws.lb.ListenerRule(pre('envCheckListener'), {
    conditions: [{
		pathPattern: {
			values: ["/domain-user-check*"],
		},
    }],
    actions: [{
        type: "fixed-response",
        fixedResponse: {
            contentType: "text/plain",
            statusCode: "200",
            messageBody: pulumi.interpolate`User Domain Check OK - ENV ${getStack()}`,
        },
    }],
    listenerArn: coreInfraStackRef.getOutput('port80ListenerArn'),
    priority: 49,  // We'll have the onboarding listener be a higher priority.
});
domain-user-check passes
i tried setting the SG to omni ingress/egress and it didnt help 😮
ALB's security groups are pretty loose too
d
any logs in lambda?
m
either i cant find em or it never got called 😂
d
here's some reference code that works, hope it helps with refining yours: https://gist.github.com/antdking/a40db115e3369e8bb162e541d617dfed
m
oh hey! i... uh... didn't attach the TG to the lambda
d
lol
m
getting logs / invocation / etc in CloudWatch and lambda and chugging along with the URL remapping.... its been a journey
d
There's a 1mb file size limit on the body response btw
m
this:
Copy code
const bucketString = bucket.bucket.apply(t => t);

const proxyLambda = new aws.lambda.Function(pre('s3-lambda'), {
    runtime: aws.lambda.Runtime.NodeJS18dX,
    handler: "index.handler",
	vpcConfig: {
		subnetIds: vpc.privateSubnetIds,
		securityGroupIds: [lambdaSecurityGroup.id],
	},
    code: new pulumi.asset.AssetArchive({
        "index.js": new pulumi.asset.StringAsset(`
            const AWS = require('aws-sdk');
            const s3 = new AWS.S3();

            exports.handler = async (event) => { // 3
				console.log('Starting Execution');
                const path = event.path.substring(1);

                try {
                    const s3Response = await s3.getObject({
                        Bucket: '${pulumi.interpolate`${bucketString}`}',
                        Key: path,
                    }).promise();

                    return {
                        statusCode: 200,
						statusDescription: "200 OK",
                        body: s3Response.Body.toString(),
                        headers: { "Content-Type": "text/plain" },
                    };
                } catch (error) {
                    return {
                        statusCode: 500,
                        body: "Internal Server Error",
                    };
                }
            };
        `),
    }),
    role: lambdaExecutionRole.arn,
}, {dependsOn: [lambdaExecutionRole, lambdaSecurityGroup, lambdaTargetGroup]});
gives me an invalid TS file because the bucket.bucket isnt parsed
i tried parsing it the way we did earlier
no bueno
d
Pass it in as an environment variable, or use pulumi.interpolate. Attributes on resources are wrapped as pulumi Outputs
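The environment-variable route, sketched (resource names are assumptions); it sidesteps the interpolation problem entirely because the handler source stays a static string:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

declare const bucket: aws.s3.Bucket;
declare const lambdaExecutionRole: aws.iam.Role;

const proxyLambda = new aws.lambda.Function("s3-proxy", {
    runtime: aws.lambda.Runtime.NodeJS18dX,
    handler: "index.handler",
    role: lambdaExecutionRole.arn,
    environment: {
        // An Output<string> can be passed straight in; no apply needed.
        variables: { BUCKET_NAME: bucket.bucket },
    },
    code: new pulumi.asset.AssetArchive({
        "index.js": new pulumi.asset.StringAsset(`
            const AWS = require('aws-sdk');
            const s3 = new AWS.S3();

            exports.handler = async (event) => {
                const obj = await s3.getObject({
                    Bucket: process.env.BUCKET_NAME, // read at runtime
                    Key: event.path.substring(1),
                }).promise();
                return { statusCode: 200, body: obj.Body.toString() };
            };
        `),
    }),
});
```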
m
I tried interpolate 😅
d
Needs to be on the outer string, not within the string
m
Copy code
Bucket: '${pulumi.interpolate`${bucket.bucket}`}',
?
oh nvm
i get it haha
ok this is a little embarrassing ☠️
oop definitely found a bug
if you update a variable being passed into a StringAsset (even completely rename it) it does not update the lambda
that could get pretty nasty in cicd
does the string need to be evaluated to be converted or something?
okay please help.... 😭
Copy code
const transformedOutput = bucket.bucket.apply(val => val);
const templateOut = pulumi.interpolate`${transformedOutput}`;
let bucketOut = ''
templateOut.apply(s => { bucketOut = s; });

                        Bucket: '${bucketOut}',
still gets me a callback output
i've tried like every combination of interpolate and apply
even tried putting the entire proxy declaration inside the apply block. just get null
d
Try returning the StringAsset object from
bucket.bucket.apply
m
like so?
Copy code
const string = bucket.bucket.apply(s => { return s })
not seeing a StringAsset type
its still just output
it parses to this inside the template lol
Copy code
To get the value of an Output<T> as an Output<string> consider either:
1: o.apply(v => `prefix${v}suffix`)
2: pulumi.interpolate `prefix${v}suffix`

See https://www.pulumi.com/docs/concepts/inputs-outputs for more details.
This function may throw in a future version of @pulumi/pulumi.',
and neither of those seem to work unless im doing something very wrong
d
pulumi.asset.StringAsset
Actually that won't work. The entire AssetArchive object needs wrapping in bucket.bucket.apply
m
i tried that haha
it's just skipped entirely
when i wrap literally all of this inside apply:
Copy code
const proxyLambda = new aws.lambda.Function(pre('s3-lambda'), {
    runtime: aws.lambda.Runtime.NodeJS18dX,
    handler: "index.handler",
	vpcConfig: {
		subnetIds: vpc.privateSubnetIds,
		securityGroupIds: [lambdaSecurityGroup.id],
	},
    code: new pulumi.asset.AssetArchive({
        "index.js": new pulumi.asset.StringAsset(`
            const AWS = require('aws-sdk');
            const s3 = new AWS.S3();

            exports.handler = async (event) => { // 16
				console.log('Starting Execution');
                const path = event.path.substring(1);

                try {
                    const s3Response = await s3.getObject({
                        Bucket: '${string}',
                        Key: path,
                    }).promise();

                    return {
                        statusCode: 200,
						statusDescription: "200 OK",
                        body: s3Response.Body.toString(),
                        headers: { "Content-Type": "text/plain" },
                    };
                } catch (error) {
                    return {
                        statusCode: 500,
                        body: "Internal Server Error",
                    };
                }
            };
        `),
    }),
    role: lambdaExecutionRole.arn,
}, {dependsOn: [lambdaExecutionRole, lambdaSecurityGroup, lambdaTargetGroup]});
and assign it outward with a "let"
that variable is null.
d
The code you sent doesn't align with what you described
m
i undid it, let me re-wrap
this:
Copy code
let proxyLambda!: aws.lambda.Function;

bucket.bucket.apply(s => {
	proxyLambda = new aws.lambda.Function(pre('s3-lambda'), {
		runtime: aws.lambda.Runtime.NodeJS18dX,
		handler: "index.handler",
		vpcConfig: {
			subnetIds: vpc.privateSubnetIds,
			securityGroupIds: [lambdaSecurityGroup.id],
		},
		code: new pulumi.asset.AssetArchive({
			"index.js": new pulumi.asset.StringAsset(`
				const AWS = require('aws-sdk');
				const s3 = new AWS.S3();
	
				exports.handler = async (event) => { // 16
					console.log('Starting Execution');
					const path = event.path.substring(1);
	
					try {
						const s3Response = await s3.getObject({
							Bucket: '${s}',
							Key: path,
						}).promise();
	
						return {
							statusCode: 200,
							statusDescription: "200 OK",
							body: s3Response.Body.toString(),
							headers: { "Content-Type": "text/plain" },
						};
					} catch (error) {
						return {
							statusCode: 500,
							body: "Internal Server Error",
						};
					}
				};
			`),
		}),
		role: lambdaExecutionRole.arn,
	}, {dependsOn: [lambdaExecutionRole, lambdaSecurityGroup, lambdaTargetGroup]});
})
just makes the whole object never get evaluated
d
Yep, avoid making resources in Apply. Won't go well
m
so how do I get the actual value to inject into a string template 😭
d
I mean around the code parameter
m
works
Copy code
const proxyLambda = new aws.lambda.Function(pre('s3-lambda'), {
    runtime: aws.lambda.Runtime.NodeJS18dX,
    handler: "index.handler",
	vpcConfig: {
		subnetIds: vpc.privateSubnetIds,
		securityGroupIds: [lambdaSecurityGroup.id],
	},
    code: bucket.bucket.apply(s => { return new pulumi.asset.AssetArchive({
        "index.js": new pulumi.asset.StringAsset(`
            const AWS = require('aws-sdk');
            const s3 = new AWS.S3();

            exports.handler = async (event) => { // 18
				console.log('Starting Execution');
                const path = event.path.substring(1);

                try {
                    const s3Response = await s3.getObject({
                        Bucket: '${s}',
                        Key: path,
                    }).promise();

                    return {
                        statusCode: 200,
						statusDescription: "200 OK",
                        body: s3Response.Body.toString(),
                        headers: { "Content-Type": "text/plain" },
                    };
                } catch (error) {
                    return {
                        statusCode: 500,
                        body: "Internal Server Error",
                    };
                }
            };
        `)})}),
    role: lambdaExecutionRole.arn,
}, {dependsOn: [lambdaExecutionRole, lambdaSecurityGroup, lambdaTargetGroup]});
that... was not easy 😭
thank you for all the help btw; owe you like 12 e-beers