# general
Hi there, I think I noticed a bug, or maybe I’m doing something wrong. I’d like your advice before opening an issue. When I update a configuration, I noticed it creates a new ReplicaSet and does not delete the old one (or maybe it should update it, I don’t know). So it kills all the pods in the old ReplicaSet and creates the new one as expected, but now I have stale, useless ReplicaSets left behind. Is this something I misconfigured somewhere, or is it a bug?
```
replicaset.apps/context-clusters-pvmtest-mqtt-5979fbcdbc                       2         2         2       2m13s
replicaset.apps/context-clusters-pvmtest-mqtt-client-68fd4d4445                0         0         0       23m
replicaset.apps/context-clusters-pvmtest-mqtt-client-6b8bdfc4b7                2         2         2       2m13s
replicaset.apps/context-clusters-pvmtest-mqtt-f5bd9548f                        0         0         0       25m
replicaset.apps/context-clusters-pvmtest-portal-567948dccd                     2         2         2       2m13s
replicaset.apps/context-clusters-pvmtest-portal-6c5968cfcd                     0         0         0       26m
```
Hello. I don’t think this is related to Pulumi, since ReplicaSet management in a Deployment is handled by k8s itself. Here’s more info on what k8s does when you update a Deployment: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment
you can change the number of previous versions to keep if 10 is too many
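The “number of previous versions to keep” is the Deployment’s `spec.revisionHistoryLimit` field, which defaults to 10. As a sketch (using one of the Deployment names from the listing above; adapt to your own), it can be lowered with a patch:

```shell
# Keep only 1 old ReplicaSet around after each rollout.
# revisionHistoryLimit defaults to 10; the deployment name here is
# taken from the listing above and is just an example.
kubectl patch deployment context-clusters-pvmtest-mqtt \
  --type merge -p '{"spec":{"revisionHistoryLimit":1}}'
```

If the cluster is managed by Pulumi, it would be cleaner to set `revisionHistoryLimit` in the Deployment spec in your Pulumi program instead, so the setting survives the next `pulumi up`.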
Thanks @plain-businessperson-30883 and @better-rainbow-14549 for pointing this out; I’m still new to k8s. Since the cluster is managed by Pulumi, I’m not sure we need this mechanism. In this case the rollback would be done by reverting our code and/or our stack config and deploying (`pulumi up`) the previous version of the deployment, right?
To be honest, I’m not sure that one thing replaces the other. k8s Deployments will create a new RS anyway to switch to the new version. Maybe just lowering the “RS to keep” limit, as @better-rainbow-14549 pointed out, would be enough for you.
Yes, but I mean, I don’t know how (or if) Pulumi can use the previous RS to do a rollback, and if not, mixing manual operations like rolling back a deployment with automated management through Pulumi could lead to inconsistencies. So I feel like we have to choose one over the other in this case (and yes, if so, we can/have to lower the RS-to-keep limit after a deployment).
Pulumi wouldn’t use the previous ones, but kube will, up until the point where every pod in the new ReplicaSet is running without error.
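For reference, the Kubernetes-native rollback mentioned here works by re-activating one of the old ReplicaSets. A minimal sketch, again assuming the example deployment name from the listing above:

```shell
# List the revisions (backed by the old ReplicaSets) available to roll back to.
kubectl rollout history deployment/context-clusters-pvmtest-mqtt

# Roll back to the previous revision: kube scales the old ReplicaSet
# back up and scales the current one down.
kubectl rollout undo deployment/context-clusters-pvmtest-mqtt
```

As discussed above, doing this by hand on a Pulumi-managed cluster will make the live state drift from the stack, so reverting the code and running `pulumi up` is the more consistent option.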
I wouldn’t have any qualms about setting it to 1 instead of 10.
@better-rainbow-14549 @plain-businessperson-30883 I realize I didn’t thank you both for the insights here. It was really helpful for understanding this better. So thanks, guys 🙂
No problem. I’m glad to help 🙂