# general
After doing a `pulumi up` with my `k8s.apps.v1.Dep...`, k8s (or Pulumi?) replaced all existing pods immediately instead of replacing in a one-by-one fashion, waiting for the readinessProbe to become true. Is this something I need to configure Pulumi to do?
@full-dress-10026 what do you mean when you say "replaced all existing pods immediately"? Do you mean the Deployment resource was replaced, or that the rollout was not incremental?
If the latter, that is managed by the k8s deployment controller, not Pulumi. We have no ability to control it beyond what is exposed through the Deployment API.
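For reference, the one-by-one rollout behavior the question asks about is controlled by the Deployment's `strategy` and the pod's `readinessProbe` fields in the Kubernetes `apps/v1` API; Pulumi passes these through unchanged. A minimal sketch of the relevant fields, written as a plain Python dict mirroring the API shape (the image name, probe path, and port are illustrative placeholders, not from this thread):

```python
# Sketch of the apps/v1 Deployment fields that control incremental rollouts.
# All concrete values below are illustrative placeholders.
deployment_spec = {
    "replicas": 3,
    "strategy": {
        "type": "RollingUpdate",
        "rollingUpdate": {
            # At most one pod down and one extra pod up at a time,
            # so the controller swaps pods roughly one by one.
            "maxUnavailable": 1,
            "maxSurge": 1,
        },
    },
    "template": {
        "spec": {
            "containers": [{
                "name": "app",
                "image": "example/app:v2",  # placeholder
                "readinessProbe": {
                    # The controller waits for this probe to pass on a new
                    # pod before taking the next old pod out of rotation.
                    "httpGet": {"path": "/healthz", "port": 8080},
                    "initialDelaySeconds": 5,
                    "periodSeconds": 10,
                },
            }],
        },
    },
}
```

These are Kubernetes API fields, so the same keys apply whether you write them in Pulumi, Helm, or raw YAML. Note that the strategy only governs in-place updates: if a change touches an immutable field, the whole object is replaced rather than rolled.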
If the former, it could be a bug but we’d need to know more to say for sure.
I think the deployment gets entirely replaced.
```
 +-  ├─ kubernetes:apps:Deployment  model-updater-deployment   replace  [diff: ~metadata,spec]
 +-  └─ kubernetes:apps:Deployment  model-executor-deployment  replace  [diff: ~metadata,spec]
```
what did you change
like, what’s the diff
I changed `metadata.labels.version` and the container image. I can DM the diff if that'd help.
@full-dress-10026 yes please!
that should not trigger a replacement.
Just looping back, this triggered a replacement because `…` is immutable. If you change that, ever, you must replace it. Kubernetes will reject any changes to this field.
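For context (the field name was stripped from this export, so this is an assumption about which field was meant): the best-known immutable field on an `apps/v1` Deployment is `spec.selector`. A small sketch, using plain dicts, of the kind of check that makes such an update impossible in place:

```python
# Hypothetical sketch: detect a change to an immutable selector field,
# which (for apps/v1 Deployments) the Kubernetes API server rejects on
# update, so the only way forward is a delete-and-recreate ("replace").
def selector_changed(old_spec: dict, new_spec: dict) -> bool:
    """Return True if the immutable selector differs between two specs."""
    return old_spec.get("selector") != new_spec.get("selector")

# Illustrative specs where a version label is mirrored into matchLabels.
old = {"selector": {"matchLabels": {"app": "model-updater", "version": "v1"}}}
new = {"selector": {"matchLabels": {"app": "model-updater", "version": "v2"}}}

assert selector_changed(old, new)       # selector changed -> replace required
assert not selector_changed(old, old)   # unchanged selector -> in-place update OK
```

This matches the symptom in the thread: bumping `metadata.labels.version` is harmless on its own, but if the same label is mirrored into `spec.selector.matchLabels`, the update can only succeed as a replacement, which is why Pulumi plans a `replace` instead of an in-place update.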