# automation-api
v
Hey everyone! I have some resources I need to create/destroy on eventbridge events. I wrapped my component resources in the automation API, and I have it all working locally, but I need to host this in AWS now and I’m a bit stuck. I thought maybe I could run it from lambda, but that wasn’t working. Any recommendations for the easiest place to host this where it can be triggered by eventbridge? I’m not a pro at EC2 or containers, but I’m willing to learn. That said, the simpler the better in this case. I appreciate the help!
Any examples or resources would be really helpful too
m
In EventBridge you set up rules to filter events and target resources. You'd figure out what these events look like in EventBridge, set up a rule to filter them, and direct them to your resource targets. The easiest first step might be a rule targeting a Lambda function that just console.logs the event.
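That "first step" handler can be as small as this (a sketch — it just echoes whatever EventBridge delivers so you can see the event shape):

```typescript
// Sketch of the "first step" handler: log the raw EventBridge event
// so you can inspect its shape before writing a rule pattern against it.
export const handler = async (event: Record<string, unknown>): Promise<void> => {
  console.log(JSON.stringify(event, null, 2));
};
```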
Here's an example of a rule that filters S3 CloudTrail events and targets an SQS queue (which in turn targets the Lambda, not shown):
const eventBridgeRuleCloudTrailAction = new aws.cloudwatch.EventRule(`${appName}-cloudtrail-s3-action`, {
  description: "CloudTrail event filtered down to S3 on the asset buckets",
  eventPattern: pulumi
    .all([assetInputBucket.id, assetOutputBucket.id])
    .apply(([assetInputBucketName, assetOutputBucketName]) => {
      return JSON.stringify({
        source: ["aws.s3"],
        "detail-type": ["AWS API Call via CloudTrail"],
        detail: {
          eventSource: ["s3.amazonaws.com"],
          eventName: [
            "CompleteMultipartUpload",
            "CopyObject",
            "DeleteObject",
            "DeleteObjects",
            "PutObject",
            "UpdateObject",
          ],
          requestParameters: {
            bucketName: [assetInputBucketName, assetOutputBucketName],
          },
        },
      });
    }),
});

new aws.cloudwatch.EventTarget(`${appName}-ebt-cloudtrail-action-sqs`, {
  rule: eventBridgeRuleCloudTrailAction.name,
  arn: cloudTrailEventQueue.arn,
  inputTransformer: {
    inputPaths: {
      detailType: "$.detail-type",
      eventName: "$.detail.eventName",
      id: "$.id",
      requestParameters: "$.detail.requestParameters",
      source: "$.source",
    },
    inputTemplate: `{
      "detail": {
        "eventId": <id>,
        "eventName": <eventName>,
        "eventSource": <source>,
        "messageType": "Event",
        "requestParameters": <requestParameters>,
        "version": "0"
      },
      "detail-type": <detailType>,
      "id": <id>,
      "source": <source>
    }`,
  },
});
The target also includes an inputTransformer, which does some basic reshaping of the event so it matches the shape our target's handler expects.
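For clarity, the transformer above is roughly equivalent to this plain function (illustrative only — the actual substitution happens inside EventBridge, not in your code):

```typescript
// Illustrative equivalent of the inputTransformer above: pull fields out of
// the raw CloudTrail event and rebuild the shape the handler expects.
interface RawEvent {
  id: string;
  source: string;
  "detail-type": string;
  detail: { eventName: string; requestParameters: unknown };
}

function transformEvent(raw: RawEvent) {
  return {
    detail: {
      eventId: raw.id,
      eventName: raw.detail.eventName,
      eventSource: raw.source,
      messageType: "Event",
      requestParameters: raw.detail.requestParameters,
      version: "0",
    },
    "detail-type": raw["detail-type"],
    id: raw.id,
    source: raw.source,
  };
}
```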
"I thought maybe I could run it from lambda, but that wasn’t working"
This should work, what wasn't working?
v
I appreciate the examples! I have the eventbridge triggers figured out for lambdas, but when I execute the lambda I’m getting the following error:
ERROR	CommandError: code: -2
 stdout: 
 stderr: 
 err?: Error: spawn pulumi ENOENT

    at Object.createCommandError (/var/task/node_modules/@pulumi/automation/errors.ts:73:21)
    at ChildProcess.<anonymous> (/var/task/node_modules/@pulumi/automation/cmd.ts:86:27)
    at ChildProcess.emit (events.js:400:28)
    at Process.ChildProcess._handle.onexit (internal/child_process.js:283:12)
    at onErrorNT (internal/child_process.js:472:16)
    at processTicksAndRejections (internal/process/task_queues.js:82:21) {
  commandResult: CommandResult {
    stdout: '',
    stderr: '',
    code: -2,
    err: Error: spawn pulumi ENOENT
        at Process.ChildProcess._handle.onexit (internal/child_process.js:277:19)
        at onErrorNT (internal/child_process.js:472:16)
        at processTicksAndRejections (internal/process/task_queues.js:82:21) {
      errno: -2,
      code: 'ENOENT',
      syscall: 'spawn pulumi',
      path: 'pulumi',
      spawnargs: [Array]
    }
  }
}
My lambda code works when I run it locally, and I think the only difference is that my local environment has access to the pulumi CLI, while the lambda environment doesn’t.
So I think I would need to figure out how to package the cli into lambda w/ layers or a docker image or something, or just figure out how to run pulumi as an ECS task, and set up my eventbridge trigger like that
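One quick way to confirm that diagnosis from inside the Lambda is to check whether the CLI is resolvable on PATH before invoking the Automation API (`cliAvailable` is just a hypothetical helper for illustration — the ENOENT it catches is exactly the error in the stack trace above):

```typescript
import { spawnSync } from "child_process";

// Returns true if `cmd` can be spawned from the current PATH.
// A missing binary surfaces as result.error with code ENOENT --
// the same failure the Automation API reported above.
function cliAvailable(cmd: string): boolean {
  const result = spawnSync(cmd, ["version"], { stdio: "ignore" });
  return result.error === undefined;
}
```

e.g. `if (!cliAvailable("pulumi")) { /* fail fast with a clear message */ }` at the top of the handler.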
m
You have two options: 1) use a Docker container image as the runtime, or 2) provide the Pulumi CLI as a Lambda layer.
The downside to using Lambda layers is that a function's total unzipped size, including all of its layers, is limited to 250MB (IIRC). Docker runtimes don't have that limit, but you'll no longer see your code in the AWS Console, and your deployment times might increase.
Docker runtime method:
const lambdaImage = containerRepository.buildAndPushImage({
  dockerfile: "lambda.Dockerfile",
  env: {
    DOCKER_BUILDKIT: "1",
  },
});

const lambdaFunctionApi = new aws.lambda.Function(
  `${appName}-api`,
  {
    packageType: "Image", // required when deploying from a container image
    imageUri: lambdaImage,
    imageConfig: { commands: ["lambdaApiHandler.handler"] },
    ...
  },
  { dependsOn: [applicationRole, lambdaImage] },
);
(lambda.Dockerfile example)
FROM public.ecr.aws/lambda/nodejs:16

ARG FUNCTION_DIR="/var/task"

RUN mkdir -p ${FUNCTION_DIR}

# Install the Pulumi CLI so the Automation API can spawn it
RUN curl -fsSL https://get.pulumi.com | sh && \
    cp /root/.pulumi/bin/* /usr/local/bin/

COPY dist/app ${FUNCTION_DIR}
COPY dist/lambda-layer/nodejs/node_modules ${FUNCTION_DIR}/node_modules
Lambda layer method:
const lambdaLayer = new aws.lambda.LayerVersion(`${appName}-lambda-layer`, {
  code: new pulumi.asset.FileArchive("./dist/lambda-layer"),
  compatibleRuntimes: [aws.lambda.NodeJS12dXRuntime],
  layerName: `${appName}-lambda-layer`,
});

const lambdaFunctionApi = new aws.lambda.Function(
  `${appName}-api`,
  {
    code: new pulumi.asset.FileArchive("./dist/app"),
    handler: "lambdaApiHandler.handler",
    layers: [lambdaLayer.arn],
    ...
  },
  { dependsOn: [applicationRole, lambdaLayer] },
);
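One gotcha with the layer route: Lambda extracts layer contents under /opt, and /opt/bin is already on the function's PATH. So in addition to the `nodejs/node_modules` directory, the layer archive needs to ship the pulumi binary in a top-level `bin/` directory for a bare `spawn pulumi` to resolve. A tiny sketch of that PATH logic:

```typescript
// Lambda layers are extracted under /opt, and /opt/bin is already on the
// function's PATH -- so a binary shipped at <layer-zip>/bin/pulumi becomes
// spawnable by bare name, which is exactly what the Automation API attempts.
function dirOnPath(dir: string, pathVar: string): boolean {
  return pathVar.split(":").includes(dir);
}
```

e.g. `dirOnPath("/opt/bin", process.env.PATH ?? "")` inside the function should come back true.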