# general
f
After doing a `pulumi up` with my `k8s.apps.v1.Deployment`, k8s (or Pulumi?) replaced all existing pods immediately instead of replacing them one by one and waiting for the readinessProbe to become true. Is this something I need to configure Pulumi to do?
c
@full-dress-10026 what do you mean when you say “replaced all existing pods immediately”? Do you mean the Deployment resource was replaced, or that the rollout was not incremental?
If the latter, that is managed by the k8s deployment controller, not Pulumi; we have no ability to control it beyond what is exposed through the Deployment API.
If the former, it could be a bug, but we’d need to know more to say for sure.
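A minimal sketch of what the Deployment API itself exposes for pacing a rollout, using Pulumi’s TypeScript SDK. The resource name, labels, image, and probe endpoint below are placeholders, not taken from the thread:

```typescript
import * as k8s from "@pulumi/kubernetes";

// Rollout pacing lives on the Deployment spec; Pulumi just passes it through
// to the Kubernetes deployment controller.
const appLabels = { app: "model-executor" };

new k8s.apps.v1.Deployment("model-executor-deployment", {
    spec: {
        replicas: 3,
        selector: { matchLabels: appLabels },
        strategy: {
            type: "RollingUpdate",
            rollingUpdate: {
                maxUnavailable: 0, // keep every old pod until a new one passes its readinessProbe
                maxSurge: 1,       // bring up at most one extra pod at a time
            },
        },
        template: {
            metadata: { labels: appLabels },
            spec: {
                containers: [{
                    name: "model-executor",
                    image: "example/model-executor:v1", // placeholder image
                    readinessProbe: { httpGet: { path: "/healthz", port: 8080 } },
                }],
            },
        },
    },
});
```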
f
I think the deployment gets entirely replaced.
```
 +-  ├─ kubernetes:apps:Deployment  model-updater-deployment   replace  [diff: ~metadata,spec]
 +-  └─ kubernetes:apps:Deployment  model-executor-deployment  replace  [diff: ~metadata,spec]
```
c
what did you change?
like, what’s the diff?
f
I changed `metadata.labels.version` and the container image. I can DM the diff if that'd help.
c
@full-dress-10026 yes please!
that should not trigger a replacement.
Just looping back: this triggered a replacement because `.spec.selector` is immutable. If you ever change it, the Deployment must be replaced; Kubernetes will reject any in-place change to that field.
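To illustrate the distinction, here is a sketch that assumes the changing version label had leaked into `spec.selector.matchLabels`. Keeping the selector limited to stable labels lets the image and a `version` label change without forcing a replacement; the labels, version, and names below are hypothetical, not taken from the actual diff:

```typescript
import * as k8s from "@pulumi/kubernetes";

// Stable labels only: spec.selector is immutable, so never put a value that
// changes per release (like a version) in matchLabels.
const selectorLabels = { app: "model-updater" };

new k8s.apps.v1.Deployment("model-updater-deployment", {
    metadata: {
        // Changing labels here is fine; it does not touch spec.selector.
        labels: { ...selectorLabels, version: "1.2.0" },
    },
    spec: {
        selector: { matchLabels: selectorLabels },
        template: {
            // Pod template labels may include the version as long as they still
            // match selectorLabels; changing them rolls the pods incrementally.
            metadata: { labels: { ...selectorLabels, version: "1.2.0" } },
            spec: {
                containers: [{
                    name: "model-updater",
                    image: "example/model-updater:1.2.0", // new image triggers a rolling update
                }],
            },
        },
    },
});
```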
f
Thanks!