The `DockerBuild.cacheFrom` docs say this: ...
# general
f
The `DockerBuild.cacheFrom` docs say this:
/**
 * An optional CacheFrom object with information about the build stages to use for the Docker
 * build cache. This parameter maps to the --cache-from argument to the Docker CLI. If this
 * parameter is `true`, only the final image will be pulled and passed to --cache-from; if it is
 * a CacheFrom object, the stages named therein will also be pulled and passed to --cache-from.
 */
What does this mean: "If this parameter is `true`, only the final image will be pulled and passed to --cache-from"?
w
The default behavior of `--cache-from` in Docker is to only cache the final stage (if you are using multi-stage builds). So if you pass `true` here you get that behavior. But if you also want to cache other stages (which in general you do if you are sensitive to build caching performance), then you can specify the stage names to cache in the CacheFrom object.
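For reference, a minimal sketch of what that looks like with the classic `@pulumi/docker` `Image` resource; the registry, build context, and stage name below are placeholders, and the exact shape of `DockerBuild` may vary across SDK versions:

```typescript
import * as docker from "@pulumi/docker";

// Sketch only: registry, context, and stage name are placeholders.
const image = new docker.Image("app", {
    imageName: "registry.example.com/app:latest",
    build: {
        context: "./app",
        // `cacheFrom: true` would pull only the final image for --cache-from.
        // Passing a CacheFrom object also pulls the named stages, so earlier
        // multi-stage layers (e.g. a `FROM ... AS build` stage) can be reused.
        cacheFrom: {
            stages: ["build"],
        },
    },
});

export const imageName = image.imageName;
```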
f
Hmm. I was hoping that would help work around this https://github.com/pulumi/pulumi-docker/issues/23. Have you guys discovered any workarounds for the time being?
l
Unfortunately, this appears to be intrinsic to how Docker works (even with Pulumi out of the mix). One option is to take control of Docker externally to Pulumi, however you want: you push/pull/etc. whenever you think it is right to do so, and then just take the image ID you've produced and provide that to Pulumi.
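A hedged sketch of that workaround: build and push the image outside of Pulumi (for example in CI), then hand the resulting reference to whatever resource consumes it. The config key and image reference here are hypothetical, and a Kubernetes Deployment is used purely as an example consumer:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// The image is built and pushed outside of Pulumi (e.g. in CI); the resulting
// reference is handed in via config. The "appImage" key is hypothetical.
const config = new pulumi.Config();
const imageRef = config.require("appImage"); // e.g. "registry.example.com/app@sha256:..."

// Pulumi never runs `docker build` here; it only wires the pre-built image
// into the consuming resource.
const appLabels = { app: "app" };
const deployment = new k8s.apps.v1.Deployment("app", {
    spec: {
        selector: { matchLabels: appLabels },
        replicas: 1,
        template: {
            metadata: { labels: appLabels },
            spec: {
                containers: [{ name: "app", image: imageRef }],
            },
        },
    },
});
```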
We have a work item to 'sort of' improve things here in the future, specifically by potentially having a mode where we use the file system as the source of truth: if the FS doesn't change, we don't rebuild the Docker artifacts.
Note: that is not proper Docker semantics, because a docker build can and will produce new builds even if the FS stays the same.
So we'd need to make this something you could opt in/out of.
f
the FS stays the same
Does this take into account the last modified timestamps of files? Or simply the contents of the files?
l
Haven't fully designed it out. In general we use contents for hashing, but we could have a contents+timestamps approach if you wanted that.
But then you might have different timestamps on different machines, thus defeating the purpose.
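For the flavor of the content-only approach, here is a rough sketch; this is illustrative only, not Pulumi's actual implementation:

```typescript
import * as crypto from "crypto";
import * as fs from "fs";
import * as path from "path";

// Illustrative only. Hash a directory by relative paths and file contents,
// so modification times never affect the result.
function hashDirectory(dir: string): string {
    const hash = crypto.createHash("sha256");
    const walk = (d: string) => {
        for (const entry of fs.readdirSync(d).sort()) {
            const full = path.join(d, entry);
            if (fs.statSync(full).isDirectory()) {
                walk(full);
            } else {
                hash.update(path.relative(dir, full)); // renames change the hash
                hash.update(fs.readFileSync(full));    // contents, but not mtime
            }
        }
    };
    walk(dir);
    return hash.digest("hex");
}
```

Because timestamps are never read, the same tree produces the same hash on any machine, which is the property that matters for skipping rebuilds.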
f
I'd prefer contents.
l
oh good then 🙂
that's what we already do for things like Lambdas
to know that we don't have to re-upload your node_modules dir every time
f
FWIW: skipping Docker builds when the FS hasn't changed will be a nice perf improvement. Right now, running `pulumi up` with no changes takes 35s, mostly due to the Docker builds. If you could detect no FS changes and skip the Docker build, I bet that number would be closer to 10s.