03/16/2023, 4:30 PM
Hi, a couple of quick questions about aws.lambda.LayerVersion. If I have a zipped layer uploaded to S3 and I set the sourceCodeHash property when creating my layer, will this automatically update the layer on Lambda (provided I make sure the hash reflects the new changed version), or does it still have to be triggered manually from the CLI? And what is the best way of generating this sourceCodeHash? Is there a built-in Pulumi property or method somewhere, or do I have to use an external package?


03/16/2023, 4:50 PM
So two questions here: 1. Do I need to use the CLI to update the lambda if the source code hash has changed, or will AWS do it for me? 2. Is there a Pulumi method to define the source code hash (in other words, create the hash for me)? The answer to 2 is no, but I think pretty much all of the languages we support (aside from YAML) have a built-in method to do the base64 SHA hashing. The answer to 1 is "I don't know, but I'm finding out now".
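For example, in TypeScript/Node.js the built-in crypto module produces the base64-encoded SHA-256 digest that Lambda expects (a minimal sketch; the function name is mine, not a Pulumi API):

```typescript
import * as crypto from "crypto";

// Base64-encoded SHA-256 digest of some bytes -- the format AWS Lambda
// uses for source code hashes.
export function base64Sha256(data: Buffer): string {
    return crypto.createHash("sha256").update(data).digest("base64");
}
```

You would feed it the packaged zip, e.g. base64Sha256(fs.readFileSync("layer.zip")).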
Done a bit of investigating, and yes, if you're uploading to S3, you'll need a way of triggering a new version of the layer to be created with the new version of the layer code. There are two ways you can do this: 1. Use the sourceCodeHash input of the layer. If you use a Pulumi asset archive for the layer code, then the resource emits a source code hash that you can use. However, it seems that even without the code changing, the source code hash output keeps changing, so you'll get a new layer version every time you run an update. 2. The other way to do this is by versioning the S3 bucket and then passing the versionId output from the bucket object into the s3ObjectVersion input of the lambda layer. If you choose to use method 2, you'll end up with something like this:
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

const bucket = new aws.s3.Bucket("lambdaBucket", {
    versioning: {
        enabled: true,
    },
});

const bucketObject = new aws.s3.BucketObject("lambdacode", {
    source: new pulumi.asset.AssetArchive({
        ".": new pulumi.asset.FileArchive("./lambda"),
    }),
    bucket: bucket,
});

const lambdaLayer = new aws.lambda.LayerVersion("layer", {
    layerName: "piers-test",
    s3Bucket: bucket.bucket,
    s3Key: bucketObject.key,
    s3ObjectVersion: bucketObject.versionId,
});


03/16/2023, 6:42 PM
@brave-planet-10645 thanks for this, I think I've come up with a decent alternative solution. Using zx, I can create a pre-Pulumi script that gets a list of the files changed in the most recent commit, scans to see if the file paths contain any layer files, and if so updates a JSON file containing hashes representing each layer. I can then pull the hashes from this file when creating my layers in Pulumi, and that way it should only update the code when necessary. Looks something like this using tsx in conjunction with zx:
#!/usr/bin/env tsx
import { $ } from "zx";
import fs from "fs";
import path from "path";
import crypto from "crypto";
import {
    getChildDirectories,
    srcLayerPath,
    hashFilePath,
} from "@lambda/script/helper";
import { rootDir } from "@root/root-dir";

async function updateHashes() {
    // Files changed in the most recent commit.
    const output = await $`git diff --name-only HEAD~1`;
    const updated = output.stdout.split("\n");
    const layers = getChildDirectories(srcLayerPath);
    const hashFile = fs.readFileSync(hashFilePath, "utf8");
    const hashes = JSON.parse(hashFile);

    for (const layer of layers) {
        const relPath = path.relative(rootDir, `${srcLayerPath}/${layer}`);

        if (updated.find((file) => file.startsWith(relPath))) {
            hashes[layer] = createHash();
        }
    }

    fs.writeFileSync(hashFilePath, JSON.stringify(hashes));
}

function createHash() {
    // Placeholder: digests the current timestamp, so the value changes on
    // every run rather than being derived from the layer contents.
    return crypto.createHash("sha256").update(Date.now().toString()).digest("base64");
}

// Run the update when the script is executed (tsx supports top-level await).
await updateHashes();

actually I believe I may have to modify this approach, as I think the hash needs to be generated from the zip file rather than just being random