cuddly-leather-18640

06/17/2021, 3:38 PM
I’m using Pulumi to manage Kubernetes deployments. One of my pods runs a process that should emit a message like
{"Namespace":"default","Signal":"terminated","TaskQueue":"main-task-queue","WorkerID":"34276@Paymahns-Air@","level":"error","msg":"Worker has been stopped.","time":"2021-06-17T10:33:39-05:00"}
(notice the `Signal` field) when it receives a SIGTERM or SIGINT. I’ve noticed that when I perform `pulumi up` (with a new Docker image), the pod gets killed without ever emitting the above log message, which suggests to me that the pod is getting a SIGKILL without the chance to shut down gracefully. Is this how `pulumi up` is intended to behave? Do I have something misconfigured? Is there another explanation?
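For context, here is a minimal Go sketch of the kind of SIGTERM/SIGINT handling the question assumes the worker performs; the timeout value and the placeholder for starting/stopping the worker are illustrative, not taken from the actual process:
```
package main

import (
	"context"
	"log"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	// Cancel the context when SIGTERM or SIGINT arrives (e.g. when Kubernetes
	// begins terminating the pod). If the process only ever receives SIGKILL,
	// this code never runs and no shutdown message is logged.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, syscall.SIGINT)
	defer stop()

	// ... start the worker here ...

	<-ctx.Done()
	log.Println("signal received, shutting down worker gracefully")

	// Give in-flight work a bounded amount of time to finish; this must fit
	// inside the pod's terminationGracePeriodSeconds (30s by default).
	shutdownCtx, cancel := context.WithTimeout(context.Background(), 25*time.Second)
	defer cancel()
	_ = shutdownCtx // hypothetical: pass this to the worker's stop/shutdown call
}
```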

billowy-army-68599

06/17/2021, 6:07 PM
pulumi just interacts with the Kubernetes API, so I'd be surprised if this was pulumi-specific...
if you're doing a `pulumi up` with a new image, it just updates the Deployment with the new image via the API; the rest happens at the Kubernetes layer
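To make that concrete, here is a hedged sketch of such a Deployment in a Pulumi Go program (the resource name, labels, image tag, and the 30-second grace period are assumptions). A `pulumi up` with a new image tag only changes this spec; Kubernetes then performs the rollout, sending each old pod SIGTERM and escalating to SIGKILL after terminationGracePeriodSeconds:
```
package main

import (
	appsv1 "github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes/apps/v1"
	corev1 "github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes/core/v1"
	metav1 "github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes/meta/v1"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		labels := pulumi.StringMap{"app": pulumi.String("worker")}
		_, err := appsv1.NewDeployment(ctx, "worker", &appsv1.DeploymentArgs{
			Spec: appsv1.DeploymentSpecArgs{
				Selector: &metav1.LabelSelectorArgs{MatchLabels: labels},
				Replicas: pulumi.Int(1),
				Template: &corev1.PodTemplateSpecArgs{
					Metadata: &metav1.ObjectMetaArgs{Labels: labels},
					Spec: &corev1.PodSpecArgs{
						// How long Kubernetes waits after SIGTERM before sending SIGKILL.
						TerminationGracePeriodSeconds: pulumi.Int(30),
						Containers: corev1.ContainerArray{
							corev1.ContainerArgs{
								Name: pulumi.String("worker"),
								// Changing this tag is all `pulumi up` does here;
								// the rolling update itself happens in Kubernetes.
								Image: pulumi.String("registry.example.com/worker:v2"),
							},
						},
					},
				},
			},
		})
		return err
	})
}
```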

cuddly-leather-18640

06/17/2021, 7:34 PM
I actually ended up filing an issue on github as well and found the root cause (not related to pulumi). Details can be found at https://github.com/pulumi/pulumi/issues/7330

billowy-army-68599

06/17/2021, 8:08 PM
looks like `dumb-init` will help with that, fyi
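For anyone reading later: the usual pattern is to run dumb-init as PID 1 with an exec-form ENTRYPOINT so SIGTERM/SIGINT are forwarded to the worker rather than swallowed by a shell. The base images and binary path below are placeholders, not taken from the thread:
```
FROM golang:1.16 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /worker .

FROM alpine:3.14
# dumb-init runs as PID 1 and forwards signals to the child process.
RUN apk add --no-cache dumb-init
COPY --from=build /worker /usr/local/bin/worker
# exec form (JSON array) so no intermediate shell swallows the signals
ENTRYPOINT ["dumb-init", "--"]
CMD ["/usr/local/bin/worker"]
```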

cuddly-leather-18640

06/18/2021, 3:25 PM
oh nice, that seems perfect, will give it a shot today. Thanks for the pointer 🙂