
acoustic-leather-88378

07/11/2019, 12:00 PM
Hi all, this might be a potentially silly question but thought I'd get some feedback anyway. Given the example of a Kubernetes `ReplicaSet` as part of a Pulumi-managed deployment, what are the current patterns people are using to handle updates to the `replicas` value once it has diverged from the initial state? I.e., post-deployment, a developer legitimately scales the replicas, meaning the stack state and the deployment state have diverged. If I ran `pulumi up` again, that `replicas` value would be reverted back to its initial value, I assume. If updating the number of replicas externally is a valid use case, how are teams handling the fact that certain properties should not be reverted to the stack state once the stack is run again? I guess this leads to a more general question around how (if at all) to handle valid/allowed changes made to the managed resource directly, and how/if the stack is then "back updated". Is the advice to simply not do/allow that, but instead ignore any ephemeral changes and encourage always updating the stack instead? Any feedback appreciated...

salmon-account-74572

07/11/2019, 2:53 PM
This question isn't specific to Pulumi; it can apply to using native Kubernetes tools as well (i.e., using `kubectl edit` to edit a Deployment to scale the replicas up, then someone re-applying the original manifest and undoing the change). The key is exactly as you suggested: you have to control how/where changes to the environment are made so there is a single "source of truth" for the desired configuration.

gorgeous-egg-16927

07/11/2019, 4:40 PM
You can also use `pulumi refresh` to reconcile Pulumi's state with the state of the world if something changes out of band like that.
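(Editor's note: a sketch of that workflow, assuming a stack is already selected:)

```shell
# Pull the live cluster state into Pulumi's state file,
# adopting the externally-scaled replica count.
pulumi refresh --yes

# Preview what a subsequent update would change now that
# the state reflects reality.
pulumi preview
```

Note that after a refresh, the next `pulumi up` will still propose reverting the replica count to whatever the program declares, unless the program is updated or the field is excluded (e.g. via `ignoreChanges`).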

acoustic-leather-88378

07/11/2019, 7:41 PM
@gorgeous-egg-16927 Interesting, will check `refresh` out, thanks