#general

full-dress-10026

09/13/2019, 6:19 PM
After doing a pulumi up with my k8s.apps.v1.Deployment, k8s (or Pulumi?) replaced all existing pods immediately instead of replacing them one by one, waiting for the readinessProbe to pass. Is this something I need to configure Pulumi to do?
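For context on the behavior being asked about: an incremental, one-pod-at-a-time rollout is driven by the Deployment's own strategy and readinessProbe fields. A minimal Pulumi (TypeScript) sketch of such a Deployment — all names, ports, and the image tag here are hypothetical, not from the thread — might look like:

```typescript
import * as k8s from "@pulumi/kubernetes";

// Hypothetical Deployment: the RollingUpdate strategy plus a readinessProbe
// is what makes Kubernetes replace pods incrementally rather than all at once.
const appLabels = { app: "my-app" };

const deployment = new k8s.apps.v1.Deployment("app-deployment", {
    spec: {
        replicas: 3,
        selector: { matchLabels: appLabels },
        strategy: {
            type: "RollingUpdate",
            rollingUpdate: {
                maxUnavailable: 1, // at most one pod down during the rollout
                maxSurge: 1,       // at most one extra pod above replicas
            },
        },
        template: {
            metadata: { labels: appLabels },
            spec: {
                containers: [{
                    name: "my-app",
                    image: "my-registry/my-app:v2", // hypothetical image
                    readinessProbe: {
                        // new pods count as ready only once this succeeds
                        httpGet: { path: "/healthz", port: 8080 },
                        initialDelaySeconds: 5,
                        periodSeconds: 10,
                    },
                }],
            },
        },
    },
});
```

With these fields set, the Kubernetes deployment controller (not Pulumi) paces the rollout, waiting for each new pod's readinessProbe before taking down the next old one.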

creamy-potato-29402

09/13/2019, 6:22 PM
@full-dress-10026 what do you mean when you say “replaced all existing pods immediately”? Do you mean the Deployment resource was replaced, or that the rollout was not incremental?
If the latter, that is managed by the k8s deployment controller, not Pulumi. We have no ability to control it beyond what is exposed through the Deployment API.
If the former, it could be a bug, but we’d need to know more to say for sure.

full-dress-10026

09/13/2019, 6:24 PM
I think the deployment gets entirely replaced.
 +-  ├─ kubernetes:apps:Deployment  model-updater-deployment     replace     [diff: ~metadata,spec]
 +-  └─ kubernetes:apps:Deployment  model-executor-deployment    replace     [diff: ~metadata,spec]

creamy-potato-29402

09/13/2019, 6:24 PM
what did you change?
like, what’s the diff?

full-dress-10026

09/13/2019, 6:26 PM
I changed metadata.labels.version and the container image. I can DM the diff if that'd help.

creamy-potato-29402

09/13/2019, 9:08 PM
@full-dress-10026 yes please!
that should not trigger a replacement.
Just looping back: this triggered a replacement because .spec.selector is immutable. If you change that, ever, you must replace the Deployment. Kubernetes will reject any changes to this field.
👍 1
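Since .spec.selector is immutable, a common way to avoid this kind of forced replacement is to keep the selector labels stable and put the changing version label only on the pod template. A sketch in Pulumi TypeScript — the resource name model-updater-deployment comes from the diff above, but the label values and image are hypothetical:

```typescript
import * as k8s from "@pulumi/kubernetes";

// Keep the selector stable; only the template labels and image change
// between versions, so Pulumi can update the Deployment in place and
// Kubernetes performs a rolling update instead of a replace.
const selectorLabels = { app: "model-updater" };              // never changes
const versionedLabels = { ...selectorLabels, version: "v2" }; // safe to bump

const deployment = new k8s.apps.v1.Deployment("model-updater-deployment", {
    metadata: { labels: versionedLabels },
    spec: {
        replicas: 2,
        // Immutable after creation: changing this forces a replacement,
        // because the API server rejects in-place updates to the selector.
        selector: { matchLabels: selectorLabels },
        template: {
            // Template labels must include the selector labels, but may
            // carry extra ones like `version` that are free to change.
            metadata: { labels: versionedLabels },
            spec: {
                containers: [{
                    name: "model-updater",
                    image: "my-registry/model-updater:v2", // hypothetical image
                }],
            },
        },
    },
});
```

On the next deploy, bumping only `version` in versionedLabels and the image tag yields an in-place update (~spec) rather than a replace.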

full-dress-10026

09/13/2019, 9:19 PM
Thanks!