If you refactor a large project into microstacks and replace some outputs with StackReferences, what happens when a dependency wants to make a breaking change? Previously, Pulumi would take care of sequencing so that a resource exists in two states, old and new, before dependents are switched over to the new one and the old resource is deleted.
Would you treat each microstack's outputs as an API that can never break? E.g. say one output is the connection details for a Kafka cluster, you want to move providers (e.g. Confluent -> Upstash), and a bunch of other microstacks all reference that output - what might you do?
The outputs a stack exports (and that other stacks consume via StackReference) are an API, with the normal contract rules that implies. If you want to change the meaning of an export, you need to change it in every place that uses it, at the same time. Since that's usually impractical, a common solution is to add new export names and values, and to deprecate the old ones rather than reuse them.
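A minimal sketch of that additive approach in a Pulumi TypeScript program (the export names, stack path, and cluster values here are hypothetical, not from any real project):

```typescript
import * as pulumi from "@pulumi/pulumi";

// --- producer stack (e.g. "org/kafka-infra/prod") ---

// Old export: kept so existing StackReferences keep resolving.
// Deprecated; consumers should migrate to kafkaEndpointV2.
export const kafkaBootstrapServers = pulumi.output("pkc-xxxx.confluent.cloud:9092");

// New export under a NEW name. Its meaning differs (different
// provider), so it must not reuse the old name.
export const kafkaEndpointV2 = pulumi.output("gentle-gnu-1234.upstash.io:9092");

// --- consumer stack ---

const infra = new pulumi.StackReference("org/kafka-infra/prod");
// Each consumer migrates on its own schedule by switching the key:
const endpoint = infra.getOutput("kafkaEndpointV2");
```

Once every consumer has switched to `kafkaEndpointV2`, the deprecated export can be deleted in a final cleanup deploy.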
Another possibility, if your microstacks are sufficiently small, is to create an entirely new stack with a new name for the new values, and leave the old one as-is (and stale) until all uses of it are moved to the new stack.
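That new-stack approach would look roughly like this at the CLI (stack and org names are made up for illustration):

```shell
# Stand up the replacement stack next to the old one:
cd kafka-infra
pulumi stack init prod-v2
pulumi up

# Each consumer then points its StackReference at
# "org/kafka-infra/prod-v2" instead of "org/kafka-infra/prod"
# and runs its own `pulumi up`.

# Once nothing references the old stack, tear it down:
pulumi destroy --stack prod
pulumi stack rm prod
```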
That choice seems like a lot of work to me, though.
I think a nice idea is to coordinate deploys across multiple microstacks with cross-references. I even remember reading an article about being able to hook into the `pulumi up` process: you could have a hook that runs while the resource being replaced is in the "in between" state (new resource created, old one not yet deleted) and then kick off a deployment of the dependents so they pick up the new output value.
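One way to script that kind of coordination is Pulumi's Automation API, which drives `pulumi up` programmatically. A sketch, assuming a hypothetical directory layout with one producer stack and two dependents:

```typescript
import { LocalWorkspace } from "@pulumi/pulumi/automation";

async function rollOut(): Promise<void> {
    // Deploy the producer stack first; its refreshed output values
    // become visible to StackReferences once this `up` completes.
    const producer = await LocalWorkspace.selectStack({
        stackName: "prod",
        workDir: "../kafka-infra", // hypothetical layout
    });
    await producer.up({ onOutput: console.log });

    // Then re-deploy each dependent so it reads the new outputs.
    for (const workDir of ["../billing", "../ingest"]) { // hypothetical
        const dependent = await LocalWorkspace.selectStack({
            stackName: "prod",
            workDir,
        });
        await dependent.up({ onOutput: console.log });
    }
}

rollOut().catch((err) => {
    console.error(err);
    process.exit(1);
});
```

This gives you an explicit sequencing point between the producer's deploy and the dependents' deploys, which is the spot where a custom hook would live.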
Anyhow, I think I'm sold on microstacks - they seem much easier to manage and can even live alongside service source code. So I guess I'll migrate to them and try to solve these kinds of issues as they arise.