# automation-api
d
Hi 👋 We have a microservice that uses the Pulumi Automation API to create resources on GCP and AWS. A daily cron triggers the Pulumi execution; there are about 15 stacks that are executed one by one. TypeScript code, Pulumi version v3.107.0. We see two troubling behaviors:

1. Silent failures
   a. The code calls `stack.refresh()` and then `stack.up()` (see the sketch below).
   b. Statistically, and without any recognizable pattern, either of the above calls can simply stop without throwing an error.
   c. The side effect is that the lock file isn't deleted, leaving the stack locked.
   d. It's very hard to reproduce: locally, per-stack execution runs as expected.
   e. Are you familiar with such behavior? What can cause it?
2. Aggressive crashes
   a. Sometimes we see many errors like `failed to register new resource X: Resource monitor is terminating`.
   b. The above errors cannot be caught, which in turn crashes the microservice completely.
   c. We saw a PR on GitHub that was meant to prevent this behavior. Around 2 months ago there was a change that might have affected this behavior (line 522). I'm not sure.
   d. What can cause the resource monitor to fail to register resources in the first place?
   e. Are you aware of such a situation?

Thanks! @crooked-cat-6983, @dazzling-salesclerk-51689
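Not the actual code from this thread, but a minimal TypeScript sketch of the flow described above, assuming `LocalWorkspace` stacks over a local program. The stack names, `workDir`, and the process-level handlers are illustrative assumptions, not the poster's setup.

```typescript
import { LocalWorkspace } from "@pulumi/pulumi/automation";

// Per-stack flow as described: refresh, then up, one stack at a time.
async function runStack(stackName: string): Promise<void> {
  const stack = await LocalWorkspace.selectStack({ stackName, workDir: "." });
  try {
    await stack.refresh({ onOutput: console.log });
    await stack.up({ onOutput: console.log });
  } catch (err) {
    // A failure that rejects lands here. The "silent failures" described
    // above resolve neither way, so the lock file is never released.
    console.error(`stack ${stackName} failed:`, err);
    throw err;
  }
}

// Speculative: errors raised outside the refresh()/up() promise chain would
// surface as process-level events, which a try/catch around those calls
// cannot intercept. Logging them at least records the crash before exit.
process.on("unhandledRejection", (reason) => {
  console.error("unhandled rejection:", reason);
});
process.on("uncaughtException", (err) => {
  console.error("uncaught exception:", err);
});

async function main(): Promise<void> {
  const stackNames = ["stack-a", "stack-b"]; // ~15 stacks in the real setup
  for (const name of stackNames) {
    await runStack(name);
  }
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```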
m
Yeah, that doesn't sound right. Seems like it may be worth filing an issue. Have you tried calling `up` with a refresh option rather than making the two calls in succession? Not an ideal solution perhaps, but curious if that might work for you in the meantime. https://www.pulumi.com/docs/reference/pkg/nodejs/pulumi/pulumi/automation/#UpOptions-refresh
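For reference, a minimal sketch of that suggestion, again assuming a `LocalWorkspace` stack over a local program (the stack name and `workDir` are placeholders):

```typescript
import { LocalWorkspace } from "@pulumi/pulumi/automation";

async function updateWithRefresh(stackName: string): Promise<void> {
  const stack = await LocalWorkspace.selectStack({ stackName, workDir: "." });
  // A single up() with UpOptions.refresh refreshes the stack's state first,
  // then applies the update, replacing the separate refresh()/up() sequence.
  await stack.up({ refresh: true, onOutput: console.log });
}
```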
d
Thanks @miniature-musician-31262 for your reply. Your suggestion made me realize our Pulumi runtime is the latest, but the Node.js SDK version was old (the option to refresh wasn't available). We've upgraded to the latest, but now there are new errors:
```
error: unable to validate AWS credentials
```
We will look into it