# kubernetes
b
So, I have a database migration `Job` that I want to guarantee completes before the rest of my stack runs. I feel like the two Pulumi ways to do this are: 1) add a `dependsOn` to the rest of the items in the stack, or 2) put the rest of the stack in an `apply`'d function off of one of the Job's outputs. The first option is a lot of busywork and passing arguments around, but when I did the second, it produced alarming previews implying that all my Kubernetes objects will be deleted and recreated with each deploy. I think what I'm looking for doesn't exist: something like a function on `Resource` that looks like:
```typescript
myResource.andThen(() => { /* stuff that happens only after the resource is created */ });
```
Does such a function already exist? Is there a pattern to achieve this other than plumbing `dependsOn` down the call stack?
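For context, option 1 looks roughly like this (a minimal sketch, not from the conversation; `migrationJob`, `appDeployment`, the image, and the command are all placeholders):

```typescript
import * as k8s from "@pulumi/kubernetes";

// Hypothetical migration Job; name, image, and command are illustrative.
const migrationJob = new k8s.batch.v1.Job("db-migrate", {
    spec: {
        template: {
            spec: {
                restartPolicy: "Never",
                containers: [{ name: "migrate", image: "my-app:latest", command: ["./migrate.sh"] }],
            },
        },
    },
});

// Every downstream resource has to receive the Job explicitly; this is
// the `dependsOn` plumbing the question is trying to avoid.
const appLabels = { app: "my-app" };
const appDeployment = new k8s.apps.v1.Deployment("app", {
    spec: {
        selector: { matchLabels: appLabels },
        template: {
            metadata: { labels: appLabels },
            spec: { containers: [{ name: "app", image: "my-app:latest" }] },
        },
    },
}, { dependsOn: [migrationJob] });
```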
g
`dependsOn` is the answer for now, but you also might want to consider using an `initContainer` for tasks like DB migration instead of a `Job`
b
We are running multiple deployments that all depend on the same database, so an `initContainer` is not a great option. This would also require running the database migrations every time a pod restarted, which I wouldn't like very much. 🙂
Thanks for the reply! Is there a feature request open like what I proposed above? Do you think I should open one?
g
I don’t know of an open issue, but the `pulumi/pulumi` repo would be the place for it
If you do want to pursue the `initContainer` route, you could make the migration script idempotent and add it to all relevant Deployments. So the script would run each time, but would be a no-op if the version already matched (or similar)
We’re using something along those lines for our on-prem SaaS
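Sketched as a Pulumi Deployment, that suggestion might look like the following (a sketch under assumptions: the image, the `./migrate.sh` path, and the resource names are hypothetical; the script itself would be the idempotent version check described above):

```typescript
import * as k8s from "@pulumi/kubernetes";

const labels = { app: "my-app" };

// Hypothetical: an initContainer runs the idempotent migration script before
// the app container starts. It must exit 0 quickly when the schema is
// already current, because it runs on every pod (re)start.
const app = new k8s.apps.v1.Deployment("app", {
    spec: {
        selector: { matchLabels: labels },
        template: {
            metadata: { labels: labels },
            spec: {
                initContainers: [{
                    name: "migrate",
                    image: "my-app:latest",    // placeholder image
                    command: ["./migrate.sh"], // idempotent: no-op if up to date
                }],
                containers: [{ name: "app", image: "my-app:latest" }],
            },
        },
    },
});
```

Note the failure mode raised below: if the migration fails, every pod with this initContainer will crash-loop retrying it.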
b
It's idempotent, but I really want a LOT more control than that. If the migration fails for any reason, I really, really don't want my entire production infrastructure limping along repeatedly trying to migrate.
Additionally, having it as its own `Resource` means I can run it on its own, if we have a tricky deploy.
b
we came to the same conclusion and wrote a dynamic provider for schema migrations
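For readers curious what that might look like, a dynamic provider for migrations could be shaped roughly like this (a minimal sketch, not the commenter's actual code; `runMigrations` and `currentVersion` are hypothetical helpers that would talk to your database):

```typescript
import * as pulumi from "@pulumi/pulumi";

// Hypothetical helpers; real implementations would connect to the database.
declare function currentVersion(dbUrl: string): Promise<string>;
declare function runMigrations(dbUrl: string): Promise<string>;

const migrationProvider: pulumi.dynamic.ResourceProvider = {
    // Run migrations on create and record the resulting schema version.
    async create(inputs: any) {
        const version = await runMigrations(inputs.dbUrl);
        return { id: "schema-migration", outs: { version } };
    },
    // Report a change only when the live schema lags the target, so
    // `pulumi up` re-runs migrations only when needed.
    async diff(id: string, olds: any, news: any) {
        const live = await currentVersion(news.dbUrl);
        return { changes: live !== news.targetVersion };
    },
    async update(id: string, olds: any, news: any) {
        const version = await runMigrations(news.dbUrl);
        return { outs: { version } };
    },
};

class SchemaMigration extends pulumi.dynamic.Resource {
    public readonly version!: pulumi.Output<string>;
    constructor(
        name: string,
        args: { dbUrl: pulumi.Input<string>; targetVersion: pulumi.Input<string> },
        opts?: pulumi.CustomResourceOptions,
    ) {
        super(migrationProvider, name, { ...args, version: undefined }, opts);
    }
}
```

Other resources can then take a plain `dependsOn` to the `SchemaMigration` instance, and the migration can be targeted on its own with `pulumi up --target` for tricky deploys.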