03/20/2023, 9:05 AM
Good morning! I have some questions about how Pulumi responds to memory limits set in Kubernetes. Some context first: in our project we run a Node.js server that uses the Pulumi Automation API to perform a `pulumi up` in response to a specific API request. The server is deployed on Kubernetes, where we initially set a memory limit of 512Mi. However, when doing a `pulumi up` through our server, we received the following error:
error: an unhandled error occurred: Program exited with non-zero exit code: -1
We’ve tried to obtain more logs and diagnostics with several commands, but nothing really shed light on why Pulumi errored out, which, as you can probably guess, made this quite painful to debug. At some point we started looking at the memory the server consumes while handling a request. It turned out that Pulumi consumes the memory it needs until it almost reaches the limit set through k8s, and then throws the error shared above when the limit is lower than what it needs to complete. I raised the limit gradually until it had enough memory (roughly 1.5 GiB), at which point the update ran without errors.

My questions:
1. Why is Pulumi configured not to crash the pod when the memory limit is lower than the amount it requires? As a start, it would make sense to make the error message in this scenario more descriptive, so that it is instantly clear that Pulumi hit a memory limit.
2. Beyond that, would it make sense to change the config/code so that Pulumi simply consumes the memory it needs and lets the pod crash when the limit is reached?

Thank you for reading through, and looking forward to answers! 🙂
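For reference, the memory limit in question is the standard container resource limit in our Deployment spec, roughly like this (all names here are placeholders, not our real manifest):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: automation-server        # placeholder name
spec:
  template:
    spec:
      containers:
        - name: server           # placeholder name
          image: our-server:latest   # placeholder image
          resources:
            limits:
              memory: "512Mi"    # the initial limit that made `pulumi up` fail;
                                 # roughly 1.5Gi turned out to be enough
```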
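On the "more descriptive error" point: our working theory (an assumption, not something Pulumi's docs confirm) is that the kernel's OOM killer terminates a child process with SIGKILL when the cgroup memory limit is hit, and the parent only ever sees an opaque non-zero exit. A minimal Node.js sketch of how a wrapper could detect that and surface a memory hint — the `runWithOomHint` helper and its message are my own invention, not part of Pulumi:

```typescript
import { spawn } from "child_process";

// Hypothetical helper (not part of Pulumi): run a child process and, when it
// dies from SIGKILL -- the signal the kernel OOM killer uses when a cgroup
// memory limit is exceeded -- reject with a memory-specific error instead of
// an opaque "non-zero exit code" one.
function runWithOomHint(cmd: string, args: string[]): Promise<void> {
  return new Promise((resolve, reject) => {
    const child = spawn(cmd, args, { stdio: "inherit" });
    child.on("error", reject); // e.g. command not found
    child.on("exit", (code, signal) => {
      if (signal === "SIGKILL") {
        reject(new Error(
          `${cmd} was killed by SIGKILL; this is what a container ` +
          `memory-limit (OOM) kill looks like -- check the pod's limits`));
      } else if (code !== 0) {
        reject(new Error(`${cmd} exited with non-zero exit code: ${code}`));
      } else {
        resolve();
      }
    });
  });
}

// Demo: a child that SIGKILLs itself, mimicking an OOM kill.
runWithOomHint("node", ["-e", "process.kill(process.pid, 'SIGKILL')"])
  .catch((err) => console.log(err.message));
```

Something along these lines in the engine would have saved us a lot of debugging time, since the exit signal already distinguishes an OOM-style kill from an ordinary failure.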