# general
b
Is your CI agent running in a container that maybe doesn't have access to the docker directory on the agent itself? You want to make sure your docker directory is being preserved across runs.
Also, I don't use GitHub Actions, but it looks like there are some first-class actions provided by Docker that may help: https://github.com/marketplace?type=actions&query=docker
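For reference, the build-and-push flavour of those looks roughly like this (a sketch from memory rather than anything from this thread; the image name and cache ref are placeholders):
- uses: docker/setup-buildx-action@v1

- uses: docker/build-push-action@v2
  with:
    context: .
    push: true
    tags: my-registry.example.com/my-app:latest    # placeholder image name
    cache-from: type=registry,ref=my-registry.example.com/my-app:latest
    cache-to: type=inline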
b
I've had success with building docker images (with layer caching) in GitHub Actions before, but my goal here is to build the image within Pulumi, since otherwise I'd need to create the ECR repository outside of Pulumi, and I'd like all resources for the deployment to be defined in Pulumi.
b
So you may need to verify that whatever action you are using has access to the local docker directory on that instance? Or maybe you mount some pre-selected directory to be the docker directory and tell docker to use that instead via environment variables, so it looks in the same place each run?
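Roughly the shape I mean on the GHA side (untested; the path and key here are just guesses, and you'd still have to point docker/buildx at that directory):
- uses: actions/cache@v2
  with:
    path: /tmp/docker-cache    # hypothetical directory persisted across runs
    key: docker-cache-${{ hashFiles('**/Dockerfile') }}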
b
That makes sense -- it may be some issue with GitHub Actions and Pulumi not playing well together when it comes to the docker image cache. I'll look into it. Is there a way to tell Pulumi where to store images locally? (I'm using repo.buildAndPushImage)
b
well I think under the hood pulumi is just using the docker CLI so if docker supports an environment variable to specify the directory you could probably just set that and pulumi doesn't need to be aware of the difference
Here's a link; it looks like you'll need to write a config file into the container, that config file can specify the new directory, and then you use an environment variable to point docker at the config file: https://docs.docker.com/engine/reference/commandline/cli/#change-the-docker-directory Of course, if GitHub Actions lets you mount an external directory onto docker's default directory, then you might not need any of that additional configuration.
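If you go the daemon-side route instead, a rough sketch (assuming an Ubuntu runner where Docker runs under systemd; data-root is the daemon setting that controls where images and layers are stored, and the path here is arbitrary):
- name: Point the Docker daemon at a custom data directory
  run: |
    echo '{ "data-root": "/tmp/docker-data" }' | sudo tee /etc/docker/daemon.json
    sudo systemctl restart docker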
Did you see that repo.buildAndPushImage accepts DockerBuild arguments? Like this: https://www.pulumi.com/docs/reference/pkg/docker/image/#dockerbuild
b
Yep! I'm using
const image = repo.buildAndPushImage({
  target: 'app',
  cacheFrom: { stages: ['deps', 'files', 'app'] },
  context: '../',
})
where the stages are the names of the targets in my multi-stage build (see dockerfile example above)
b
Gotcha. Are you using the Pulumi-provided GitHub Action?
b
Yes:
- name: Add Pulumi CLI
  uses: pulumi/action-install-pulumi-cli@v1

- name: Install Pulumi CLI
  # installs the npm dependencies for the Pulumi program (the CLI itself comes from the step above)
  run: cd deploy && npm install

- uses: pulumi/actions@v3
  with:
    command: up
    stack-name: ${{ env.STACK_NAME }}
    work-dir: deploy
  env:
    PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}
b
Cool, so maybe @broad-dog-22463 can clarify how they intend for you to utilize docker build caching.
b
It looks like it's correctly caching from the remote repo when the digest matches, but not caching layers built locally. So if the dependencies don't change, it'll use the cached deps correctly, but then will end up building files and app twice still
q
I was poking at similar issues not too long ago and had some success with enabling Buildx on GHA. Seems to cache things a bit more reliably. 🤷 Could try sticking this step before the Pulumi one:
- uses: docker/setup-buildx-action@v1
  with:
    install: true
b
Thanks! I'll give it a try
To close the loop on this: I ended up using Nikhil's suggestion of adding buildx and also added some options to the buildAndPushImage call.

In the GitHub Actions workflow:
- uses: docker/setup-buildx-action@v1
  with:
    install: true
    driver: docker
    buildkitd-flags: --debug
In the Pulumi file:
checkForBuildx()  // (see write-up linked below)

const image = repo.buildAndPushImage({
  target: 'app',
  cacheFrom: { stages: ['deps', 'files'] },
  // --load tells buildx to load the built image into the local docker image store
  extraOptions: ['--load'],
  // BUILDKIT_INLINE_CACHE=1 embeds cache metadata in the pushed image so later builds can reuse its layers
  args: { BUILDKIT_INLINE_CACHE: '1' },
})
Success! It was a bit of a struggle to get to this solution, so I wrote it up in a bit more detail here for everyone's reference.