#general

late-piano-64593

11/08/2023, 12:29 AM
Recently I hit an issue and wanted to know if there are any common patterns for this: we currently build a number of Docker images using S3 caching and other args that are not exposed in pulumi/pulumi-docker. During the `pulumi up` phase I only want to reference a local image (or import one via `docker load`), then push it to ECR. I was unable to find a clear API in pulumi/docker for pushing an existing image, though. Would this be better handled by `docker.getRemoteImage` + pulumi/Command for the push? That feels a little hacky to me.
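For reference, a minimal sketch of the pulumi/Command route might look something like this (TypeScript; the repository, tags, and the assumption that the image already exists locally are all hypothetical):

```typescript
import * as aws from "@pulumi/aws";
import * as command from "@pulumi/command";
import * as pulumi from "@pulumi/pulumi";

// Hypothetical ECR repository; in practice it may live in another project.
const repo = new aws.ecr.Repository("app-repo");

// Hypothetical local tag produced by the external build (e.g. after `docker load`).
const localTag = "app:latest";

// Short-lived ECR credentials so `docker login` works inside the command.
const auth = aws.ecr.getAuthorizationTokenOutput({ registryId: repo.registryId });

// Tag the pre-built local image and push it; re-runs when the repository URL changes.
const push = new command.local.Command("push-image", {
    create: pulumi.interpolate`echo ${auth.password} | docker login --username ${auth.userName} --password-stdin ${auth.proxyEndpoint} && docker tag ${localTag} ${repo.repositoryUrl}:latest && docker push ${repo.repositoryUrl}:latest`,
    triggers: [repo.repositoryUrl],
});

export const imageUri = pulumi.interpolate`${repo.repositoryUrl}:latest`;
```

The usual caveat applies: the registry password ends up in the command string (and therefore in state), so treating it as a secret or logging in outside Pulumi may be preferable.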

little-cartoon-10569

11/08/2023, 12:51 AM
I don't know what S3 caching is, but if the args aren't exposed via the Docker provider, then yes, you should use your own scripts to do this. My preference is to not build Docker images from Pulumi. Entangling your build and deployments like this causes problems. Maybe the code and the tools can handle it fine, but different developers will have different understandings and confusion will result. Treat building as building, and deploying as deploying.

late-piano-64593

11/08/2023, 12:54 AM
Yeah, totally agree. So is there a Pulumi pattern to perform the `docker push` operation such that it will track the image as an asset (check for updates on preview, etc.)?

little-cartoon-10569

11/08/2023, 12:55 AM
The opposite: use RemoteImage as the resource, and it detects changes after your build pipeline pushes.
At least, that's my preference.
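As a rough sketch of that pattern (assuming the build pipeline has already pushed a uniquely tagged image; the registry URL and tag below are made up):

```typescript
import * as docker from "@pulumi/docker";

// Hypothetical image already built and pushed by the CI pipeline.
// Using a unique tag per build means Pulumi sees a new `name` and re-pulls.
const imageRef = "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:v1.2.3";

const appImage = new docker.RemoteImage("app-image", {
    name: imageRef,
});

// The digest can be handed to whatever runs the container (ECS, Kubernetes, etc.).
export const imageDigest = appImage.repoDigest;
```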

late-piano-64593

11/08/2023, 1:01 AM
Yeah, I am not sure how to perform just that push with Pulumi then.
"build pipeline pushes": err, my mistake, I misread that.
So my other question: who builds the remote registry that the builders use, then? It seems like you have to break up your infra into multiple stages.

little-cartoon-10569

11/08/2023, 1:09 AM
Yes, you need multiple stages. Your infrastructure should be grouped into projects based on deployment frequency. If you intend to create a new registry for every image push, then you put your registry and your build-and-push code into one project. But since no one does that, you put your registry-creating code into your infrastructure-that-must-exist-in-order-to-build project, and you put your pull-image-and-start-container code into your app-deployment project.
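A sketch of that split, with hypothetical project and stack names: the rarely changing project owns the registry and exports its URL, and the per-release project consumes it via a StackReference:

```typescript
// Project "build-infra" (index.ts): changes rarely; owns the registry.
import * as aws from "@pulumi/aws";

const repo = new aws.ecr.Repository("app-repo");
export const repositoryUrl = repo.repositoryUrl;
```

```typescript
// Project "app-deploy" (index.ts): changes on every release; consumes the registry.
import * as pulumi from "@pulumi/pulumi";

// StackReference name is hypothetical: <org>/<project>/<stack>.
const buildInfra = new pulumi.StackReference("my-org/build-infra/prod");
const repositoryUrl = buildInfra.getOutput("repositoryUrl");

// The CI pipeline pushes to `repositoryUrl`; this project only references the
// resulting tag when deploying (e.g. into an ECS task definition or k8s spec).
export const imageToDeploy = pulumi.interpolate`${repositoryUrl}:v1.2.3`;
```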

late-piano-64593

11/08/2023, 1:14 AM
Totally agree with that design at scale, and it's a really fair point to consider this a place to break up the infra into stacks/projects. It does seem far heavier weight compared to just supporting import + push, I guess? I guess I should say: I feel like I shouldn't need a new Pulumi stack just to run `docker push`? I mean, I could just be wrong though.

little-cartoon-10569

11/08/2023, 1:16 AM
You definitely don't need it. It will work fine via the Docker provider, or via pulumi.Command. This is all just my preference 🙂 I don't ask Pulumi to run `npm publish`, and `docker push` is at exactly the same place in the SDLC, so... this is how I do it.

late-piano-64593

11/08/2023, 1:17 AM
Thanks! Makes sense. I guess some resources are easier to import, like FileAssets, vs. other things like Docker images, so it can be deceptive which inputs can easily be treated as an external asset. I guess in this case I equate `docker push` with the creation of an AWS Lambda, for example, vs. pushing a package to something like npm, hence my confusion.

little-cartoon-10569

11/08/2023, 1:20 AM
Yes, and it will work both ways. I have just found that there's less confusion and fewer people stepping on other people's toes when the separation between build and deploy happens after `docker push`, rather than before.