# general
c
I’m using Pulumi to manage Kubernetes deployments. One of my pods runs a process that should emit a message like
{"Namespace":"default","Signal":"terminated","TaskQueue":"main-task-queue","WorkerID":"34276@Paymahns-Air@","level":"error","msg":"Worker has been stopped.","time":"2021-06-17T10:33:39-05:00"}
(notice the `Signal` field) when it receives a SIGTERM or SIGINT. I’ve noticed that when I perform `pulumi up` (with a new Docker image), the pod gets killed without ever emitting the above log message, which suggests to me that the pod is getting a SIGKILL without the chance to shut down gracefully. Is this how `pulumi up` is intended to behave? Do I have something misconfigured? Is there another explanation?
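For context, the graceful-shutdown pattern the worker is supposed to follow looks roughly like this — a minimal Node/TypeScript sketch of the idea, not the actual worker code (which appears to be a different runtime); the signal names and log fields are placeholders mirroring the log line above:
```typescript
// Minimal graceful-shutdown sketch: log on SIGTERM/SIGINT, then exit cleanly.
function shutdown(signal: string): void {
    // Mirrors the structure of the worker's shutdown log line (placeholder fields).
    console.error(JSON.stringify({ level: "error", msg: "Worker has been stopped.", Signal: signal }));
    process.exit(0);
}

process.on("SIGTERM", () => shutdown("terminated"));
process.on("SIGINT", () => shutdown("interrupt"));

// If PID 1 never forwards the signal, or the pod is SIGKILLed outright,
// these handlers never run and no shutdown log is emitted.
```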
b
Pulumi just interacts with the Kubernetes API, so I'd be surprised if this was Pulumi-specific...
If you're doing a `pulumi up` with a new image, it just updates the Deployment with the new image via the API; the rest happens at the Kubernetes layer.
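For illustration, a minimal Pulumi TypeScript sketch (the resource name, labels, and image tag are placeholders, not from the original setup): bumping `image` and running `pulumi up` only patches that field through the Kubernetes API, and Kubernetes then performs the rolling update, sending SIGTERM first and SIGKILL only after `terminationGracePeriodSeconds` (30s by default) expires.
```typescript
import * as k8s from "@pulumi/kubernetes";

const appLabels = { app: "worker" };

new k8s.apps.v1.Deployment("worker", {
    spec: {
        selector: { matchLabels: appLabels },
        replicas: 1,
        template: {
            metadata: { labels: appLabels },
            spec: {
                // Kubernetes sends SIGTERM on pod termination and only sends
                // SIGKILL if the process is still running after this grace period.
                terminationGracePeriodSeconds: 30,
                containers: [{
                    name: "worker",
                    // Changing this tag and running `pulumi up` patches the
                    // Deployment via the API; Kubernetes handles the rollout.
                    image: "my-registry/worker:v2",
                }],
            },
        },
    },
});
```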
c
I actually ended up filing an issue on GitHub as well and found the root cause (not related to Pulumi). Details can be found at https://github.com/pulumi/pulumi/issues/7330
b
looks like `dumb-init` will help with that, fyi
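e.g. something like this (a sketch, assuming `dumb-init` is installed in the image; the image tag and binary path are placeholders), wiring it in as PID 1 so it forwards SIGTERM/SIGINT to the worker — more commonly you'd set `ENTRYPOINT ["dumb-init", "--"]` in the Dockerfile instead of overriding `command` in the pod spec:
```typescript
import * as k8s from "@pulumi/kubernetes";

// Container spec for the Deployment's pod template, with dumb-init as PID 1
// so signals actually reach the worker instead of stopping at a shell wrapper.
const workerContainer: k8s.types.input.core.v1.Container = {
    name: "worker",
    image: "my-registry/worker:v2",               // placeholder image tag
    command: ["dumb-init", "--", "/app/worker"],  // placeholder binary path
};
```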
c
oh nice, that seems perfect, will give it a shot today. Thanks for the pointer 🙂