# kubernetes
m
Hi Johan, rolling update should be the default in k8s. To see whether the pod is really healthy and ready, you should have liveness and readiness probes in place.
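In the Pulumi Python SDK, probes on a container might look roughly like this (a minimal sketch; the image, paths, port, and timings are placeholders, not Johan's actual setup):
```python
import pulumi_kubernetes as k8s

# Illustrative liveness/readiness probes on a container spec.
container = k8s.core.v1.ContainerArgs(
    name="app",
    image="nginx:1.27",  # placeholder image
    liveness_probe=k8s.core.v1.ProbeArgs(
        http_get=k8s.core.v1.HTTPGetActionArgs(path="/healthz", port=8080),
        initial_delay_seconds=5,
        period_seconds=10,
    ),
    readiness_probe=k8s.core.v1.ProbeArgs(
        http_get=k8s.core.v1.HTTPGetActionArgs(path="/ready", port=8080),
        period_seconds=5,
    ),
)
```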
q
Yeah, for regular updates to the deployment there's no issue - I can update the image version just fine, for example. Liveness & readiness probes are also configured, and I have a pod disruption budget in place. However, when I modify the ConfigMap that my deployment uses, it apparently has to be destroyed first, before the new version can be created, and this propagates to the deployment itself.
From the research I did, it seems to be a longstanding issue due to how Pulumi and Kubernetes interact.
In general I'm fine with replacing instead of updating the deployment, but I would like to create the new deployment first, and only destroy the old one after the new one is up and running in a healthy state.
e
I've read a GitHub issue about this, but can't find it. Do you know the issue URL?
s
> when I modify the ConfigMap that my deployment uses, it apparently has to be destroyed first
Do you have "deleteBeforeReplace: true" set on the ConfigMap? Pulumi's default behavior should not be to delete the ConfigMap before creating the new one: it should create the new ConfigMap, trigger a rolling update on the Deployment, and only delete the old ConfigMap once the update succeeds.
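For reference, explicitly opting in to that behavior in the Python SDK would look roughly like the sketch below (the resource name and data are placeholders; this is not something you would normally want here):
```python
import pulumi
import pulumi_kubernetes as k8s

# Illustrative only: explicitly forcing delete-before-replace on a ConfigMap.
# Without this option, Pulumi creates the replacement before deleting the old one.
config = k8s.core.v1.ConfigMap(
    "app-config",
    data={"key": "value"},
    opts=pulumi.ResourceOptions(delete_before_replace=True),
)
```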
q
No, I haven't touched that one.. but apparently Pulumi uses that strategy by default in this case 😅
Ah.. 😂 I had originally copied parts of the Pulumi code from elsewhere, and it was setting a hardcoded name for the ConfigMap resource in Kubernetes, thus forcing a delete-before-replace:
```python
import pulumi_kubernetes as k8s

config = k8s.core.v1.ConfigMap(
    f"{app_name}-config",
    metadata=k8s.meta.v1.ObjectMetaArgs(
        # Hardcoding metadata.name pins the Kubernetes object name, so any
        # change to the data forces a delete-before-replace of the ConfigMap.
        name=f"{app_name}-config",
    ),
    data={...},
)
```
Once I removed that and let Pulumi pick an autogenerated suffix for the name, things work as I expected: it just does a rolling update of the deployment instead 🎉
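A minimal sketch of the working setup, assuming placeholder values for app_name, the image, and the config data: with no explicit metadata.name, Pulumi auto-names the ConfigMap, so a data change creates the new ConfigMap first and then rolls the Deployment that references it.
```python
import pulumi_kubernetes as k8s

app_name = "my-app"  # placeholder

# No metadata.name: Pulumi auto-names the ConfigMap with a random suffix,
# so the replacement is created before the old object is deleted.
config = k8s.core.v1.ConfigMap(
    f"{app_name}-config",
    data={"app.conf": "key=value"},  # placeholder data
)

deployment = k8s.apps.v1.Deployment(
    f"{app_name}-deployment",
    spec=k8s.apps.v1.DeploymentSpecArgs(
        selector=k8s.meta.v1.LabelSelectorArgs(match_labels={"app": app_name}),
        template=k8s.core.v1.PodTemplateSpecArgs(
            metadata=k8s.meta.v1.ObjectMetaArgs(labels={"app": app_name}),
            spec=k8s.core.v1.PodSpecArgs(
                containers=[
                    k8s.core.v1.ContainerArgs(
                        name=app_name,
                        image="nginx:1.27",  # placeholder image
                        volume_mounts=[
                            k8s.core.v1.VolumeMountArgs(
                                name="config", mount_path="/etc/app"
                            )
                        ],
                    )
                ],
                volumes=[
                    k8s.core.v1.VolumeArgs(
                        name="config",
                        config_map=k8s.core.v1.ConfigMapVolumeSourceArgs(
                            # Referencing the auto-generated name wires the
                            # Deployment to the new ConfigMap on each change,
                            # triggering a rolling update.
                            name=config.metadata.name,
                        ),
                    )
                ],
            ),
        ),
    ),
)
```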
m
Just for the record: picking up changes to the content of the ConfigMap has to be handled by the app itself. That's why having a content hash in the name (like kustomize does for Secrets and ConfigMaps) is handy: if the app has no internal reload mechanism, the new name forces a redeploy of the pod, which picks up the changes.
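A minimal sketch of that hash-in-the-name idea in Python (the hashing helper and names here are illustrative, not something from this thread):
```python
import hashlib
import json

import pulumi_kubernetes as k8s

config_data = {"app.conf": "key=value"}  # placeholder data

# Kustomize-style: embed a hash of the content in the object name, so a
# data change yields a new ConfigMap name and forces a pod rollout.
content_hash = hashlib.sha256(
    json.dumps(config_data, sort_keys=True).encode()
).hexdigest()[:8]

config = k8s.core.v1.ConfigMap(
    "app-config",
    metadata=k8s.meta.v1.ObjectMetaArgs(name=f"app-config-{content_hash}"),
    data=config_data,
)
```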