b
when you create a bucket object resource in your pulumi code you pass it a name. This name should be repeatable, i.e. it should come out the same on your next run for the same file - such as the file path or something like that. Then on your next run the only changes you should see are:
1. creating newly declared bucket objects (new paths)
2. replacing bucket objects whose content was modified
3. removing bucket objects that are no longer declared
You shouldn't be seeing every bucket object get deleted on every deploy unless there is some key bit missing from your process
The key bit of information is the name you are providing to the resource, the key that pulumi uses to track the state of the file. If you want pulumi to do an update-in-place instead of a wipe-and-upload then you need to make sure that the name for a file that existed in the last run is the same in the current run.
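A minimal sketch of that idea: derive the Pulumi resource name purely from the file's path, so an unchanged file maps to the same name on every run. `stableName` is a hypothetical helper, not a Pulumi API, and the commented-out `aws.s3.BucketObject` usage assumes the setup from the linked example.

```typescript
// Sketch: derive a stable, repeatable resource name from a file's path
// relative to the site root. The name contains no content hash, so an
// unchanged file gets the same name on every run and pulumi can diff it
// in place instead of wipe-and-upload.
import * as path from "path";

export function stableName(siteRoot: string, filePath: string): string {
  // Normalize separators so the name is identical across OSes.
  return path.relative(siteRoot, filePath).split(path.sep).join("/");
}

// With the Pulumi AWS provider this would be used roughly like:
//   new aws.s3.BucketObject(stableName(root, file), {
//     bucket: siteBucket,
//     source: new pulumi.asset.FileAsset(file),
//   });
```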
l
so webpack hashes the file contents and appends the hash to the chunk name when outputting lazy chunks
this is done to break the browser cache when you deploy a new version of your app
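The naming scheme being described can be sketched like this. `hashedChunkName` just mimics webpack's `[name].[contenthash].js` pattern as an illustration; it is not webpack's actual implementation.

```typescript
// Sketch of webpack-style content hashing: hash the chunk's bytes and embed
// a short prefix of the digest in the output filename. Because the hash is a
// pure function of the contents, an unchanged chunk gets the same filename on
// every build, while a changed chunk gets a new name that busts browser caches.
import { createHash } from "crypto";

export function hashedChunkName(name: string, contents: string): string {
  const hash = createHash("sha256").update(contents).digest("hex").slice(0, 8);
  return `${name}.${hash}.js`;
}
```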
b
Your pulumi name does not need to match the file name, they can be different
l
oh hmm. I was just following along with this example: https://github.com/pulumi/examples/tree/master/aws-ts-static-website
b
yea that is the usual way to do it. But in your case, since cache invalidation is happening during the compilation process, if you do it that way you'll always end up replacing everything, or at the very least renaming everything
Have you looked at using cloudfront for cache invalidation instead of doing it in the compilation phase?
l
no. I'm not actually using AWS, I'm using GCP but the example seems relatively similar
maybe there is a way to do the same thing with cloud cdn
b
gotcha. So we use AWS. And we essentially do this for hosting static files: https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-serve-static-website/ and then CloudFront can do cache invalidation: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Invalidation.html so maybe there is a similar GCP service?
then your deployment phase could just be an update-in-place
l
okay. thanks for the info, I'll take a look! 🙏
b
good luck!
Your usage isn’t bad either, it just means you’ll only see replaces and deletes but never updates. It should still be only acting on files that were actually modified, right?
Unless directories get hit by that hashing as well but idk how that webpack function works
l
directories aren't hashed, it's just the app build artifacts (js and css). I don't know if they will be exactly the same thing every time.
b
gotcha, I was thinking that if it was hashing the contents and appending that to the end of the filename deterministically, then that hash would be the same if nothing had changed, so you would only see pulumi touch bucket objects that had modifications
l
yeah that is how it works. I'm just not 100% sure how deterministic the chunk naming is.
I just thought of one thing that might help me. Do you know the order in which resources will be updated? Do all updates occur before any deletes are dispatched?
b
There is no set order - it is based on the dependency tree. You can use the `pulumi graph` command to see your tree. By default pulumi will try to do as many actions concurrently as it can, so order is entirely determined by which branch of your dependency tree you are looking at. Obviously if a replace must occur for a node on a branch of the tree that has children, then pulumi will first delete those children if it must. You do have some control over how actions are taken, for instance there is the `deleteBeforeReplace` argument. By default, if a resource must be replaced, pulumi will try to create the new resource first before deleting the old one. But if you set `deleteBeforeReplace` to true then it will delete the old one before creating the new one.
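The two replacement orderings described above can be modeled with a toy function. This is not Pulumi's engine, just an illustration of what the `deleteBeforeReplace` resource option switches between.

```typescript
// Toy model of resource replacement ordering. The default is
// create-the-new-then-delete-the-old; with deleteBeforeReplace set,
// the old resource is deleted before the new one is created.
export function replaceSteps(resource: string, deleteBeforeReplace: boolean): string[] {
  return deleteBeforeReplace
    ? [`delete old ${resource}`, `create new ${resource}`]
    : [`create new ${resource}`, `delete old ${resource}`];
}
```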