# google-cloud
p
Is there planned support for `gs://` URLs when using `RemoteAsset` or `RemoteArchive` for the `BucketObject` `source`?
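In the meantime, one possible workaround (a sketch, assuming the source object is publicly readable; `gsToHttps` is a hypothetical helper, not part of Pulumi) is to rewrite the `gs://` URL into its `storage.googleapis.com` HTTPS form, which `RemoteAsset` does accept:

```typescript
// Hypothetical workaround, assuming the source object is publicly readable:
// rewrite a gs:// URL to its storage.googleapis.com HTTPS equivalent, which
// pulumi.asset.RemoteAsset does accept. gsToHttps is not part of Pulumi.
export function gsToHttps(gsUrl: string): string {
    const match = /^gs:\/\/([^/]+)\/(.+)$/.exec(gsUrl);
    if (!match) {
        throw new Error(`not a gs:// URL: ${gsUrl}`);
    }
    const [, bucket, object] = match;
    return `https://storage.googleapis.com/${bucket}/${encodeURI(object)}`;
}

// Usage sketch (bucket and object names are placeholders):
// new gcp.storage.BucketObject("artifact", {
//     bucket: "target-bucket",
//     source: new pulumi.asset.RemoteAsset(gsToHttps("gs://source-bucket/builds/app.zip")),
// });
```

For private objects this doesn't help, which circles back to the credentials question below.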
w
We’ve actually historically seen quite limited usage of `RemoteAsset` - so great to hear you are interested in using it - would love to hear more about the use case. Definitely feel free to open an issue to support it. One slight challenge will be credentials - it isn’t immediately clear which credentials this should use.
p
That’s a good point. Funnily enough, in the last few hours the use case is not relevant anymore. It was for copying existing bucket objects (build artifacts) to use in production and staging stacks. These artifacts were built during a PR preview deployment. We just realized that Pulumi shouldn’t be used to build artifacts, even though it supports it via `docker.Image` and `FileArchive` for `gcp.cloudfunctions.Function`. The reason being that these artifacts shouldn’t be deleted when re-running `pulumi up`. They’re not “persistent infrastructure” but more of a side effect of a build, and Pulumi should probably only be used for “persistent infrastructure”…
f
Depending on how you’re thinking about it, those build steps might be good candidates for dynamic providers. That way, you can control when you want to re-build those artifacts as part of your Pulumi program and still reference them within the stack.
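To make that idea concrete, here is a minimal sketch. The runnable part is just a deterministic content hash (Node’s `crypto`); the dynamic-provider wiring in the comments assumes `@pulumi/pulumi` and is illustrative, not a verified implementation:

```typescript
import { createHash } from "crypto";

// Runnable piece: a deterministic content hash over the build inputs. A
// dynamic provider's diff() can compare this hash to decide whether the
// artifact needs rebuilding on the next `pulumi up`.
export function sourceHash(inputs: string[]): string {
    const h = createHash("sha256");
    for (const input of inputs) {
        h.update(input);
        h.update("\0"); // separator so ["ab"] and ["a", "b"] hash differently
    }
    return h.digest("hex");
}

// Provider wiring sketch (assumes @pulumi/pulumi; illustrative, not verified):
// const buildProvider: pulumi.dynamic.ResourceProvider = {
//     async create(inputs: any) {
//         // run the build step here (shell out to your bundler), then:
//         return { id: inputs.hash, outs: { hash: inputs.hash } };
//     },
//     async diff(_id: string, olds: any, news: any) {
//         return { changes: olds.hash !== news.hash }; // rebuild only on change
//     },
// };
```

The design point is that `diff()` is under your control, so the build only re-runs when the hashed inputs actually change.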
p
Interesting, could you expand? Our use case is the following CI/CD workflows:

Workflow 1, on pull request:
- test
- build (create artifact)
- deploy ephemeral environment

Workflow 2, on merge to master:
- deploy staging (use artifact from previous workflow)
- deploy production manually (use same artifact as before)

Also, locally one should be able to run `pulumi up` and deploy any existing artifact to roll back, for example.
f
Ah… I see. Not quite what I had in mind in terms of what you meant by not re-building the artifact. I wasn’t thinking of it so much stack-to-stack as in terms of not wanting to run the build step as part of executing the Pulumi program itself (within a single stack).
In that case, yeah, that doesn’t work so well, since you can’t reference from staging whatever artifact you created in the ephemeral stack.
p
@white-balloon-205, have you seen any companies use Pulumi for artifacts (Docker images, zip archives, etc.) in CI/CD? I see `docker.Image` used a lot in the documentation, so it would seem that it’s recommended, but it stays pretty high level and never goes into how one would manage multiple versions of these images. Just interpolating the version wouldn’t work, since Pulumi would delete the older version when running `pulumi up`, no?
c
@prehistoric-account-60014 did you find a solution for this? Having the same thought process right this minute...
p
What I ultimately decided was to use Pulumi for “fixed” deployments that we wanted to track in a Pulumi stack and that would be changed over time. With artifacts you really want to just upload them and then never touch them; they’re supposed to be immutable. If you used Pulumi for that, a `pulumi up` with a different version would destroy the artifact. So our decision was to avoid using Pulumi for artifacts and instead only use it for deployments (i.e., something that should change when you run `pulumi up`, a declarative representation of some state the infrastructure should be in).
It would be nice if @white-balloon-205 or anybody else from the Pulumi team could chime in on whether they’ve seen companies use Pulumi to manage multiple versions of a Docker image.
w
There are a few different topics covered in this thread - curious which piece each of you is interested in, @curved-ghost-20494 @prehistoric-account-60014?

> Just interpolating the version wouldn’t work since Pulumi would delete the older version when running `pulumi up`, no?

No - this shouldn’t be true - though maybe I don’t understand exactly the scenario you have in mind? `docker.Image` doesn’t delete images currently; it only ever creates them.
> With artifacts you really want to just upload them and then never touch them, they’re supposed to be immutable.

It is definitely valid to want to separate the process of producing artifacts from the `pulumi` deployment itself. You can then use deltas to the Pulumi deployment to mark the deployment of a specific artifact, but version the artifacts out of band. That said, for Docker images in particular, it should be possible to push these without losing immutable artifact deployments.
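As a sketch of the “version out of band” approach: compute an immutable tag from the build (using the git SHA here is an assumption, and `imageTag` is a hypothetical helper) and feed it to `docker.Image`. Since the provider only pushes and never deletes, earlier tags remain in the registry:

```typescript
// Hypothetical helper: build an immutable, fully qualified image name from a
// repository path and a version (e.g. a git SHA handed in by CI).
export function imageTag(repo: string, version: string): string {
    if (!/^[A-Za-z0-9_][A-Za-z0-9_.-]*$/.test(version)) {
        throw new Error(`invalid image tag: ${version}`);
    }
    return `${repo}:${version}`;
}

// Pulumi side sketch (assumes @pulumi/docker; illustrative, not verified):
// const image = new docker.Image("app", {
//     imageName: imageTag("gcr.io/my-project/app", process.env.GITHUB_SHA!),
//     build: { context: "./app" },
// });
```

Each `pulumi up` then pushes to a new tag, so older versions stay available for rollback.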
c
I suppose what I’m after is for my infra deployment system to detect changes in my “app” directory (which is in a monolith repo), create a new Docker image with a version, and push that image to GCR. My Cloud Run infra would be updated to use that image.
(This is all running in Github actions)
I suppose I could configure it to create a new Docker image on every deploy to achieve this without needing to detect changes in my app folder.
p
Thanks for the reply @white-balloon-205. Good to know `docker.Image` doesn’t delete the image. In our case we were also using GCS for artifacts, and `gcp.storage.BucketObject` does get deleted on `pulumi up`. In this case the artifacts were GCP Cloud Function source zip archives.
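One possible mitigation (a sketch, not an endorsed pattern): put the version in the object name so each deploy writes a new object, and keep the old object around when its resource leaves the stack. `versionedObjectName` is a hypothetical helper, and `retainOnDelete` is a Pulumi resource option available in newer Pulumi versions:

```typescript
// Hypothetical helper: embed the artifact version in the object name so each
// deploy writes a new object rather than overwriting the previous one.
export function versionedObjectName(prefix: string, version: string, file: string): string {
    return [prefix, version, file].join("/");
}

// Pulumi side sketch (assumes @pulumi/gcp; `retainOnDelete` is a resource
// option in newer Pulumi versions that keeps the underlying cloud object
// when the resource is dropped from the stack):
// new gcp.storage.BucketObject("fn-source", {
//     bucket: artifactBucket.name,
//     name: versionedObjectName("functions", gitSha, "source.zip"),
//     source: new pulumi.asset.FileArchive("./build/source.zip"),
// }, { retainOnDelete: true });
```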