# general
b
I'm having an issue where the CLI just hangs on `pulumi up`. I've had a coworker test, and he is able to run `up` on my stack using files from my branch. Our working dirs are identical. Any idea why mine's hanging? I'll run with verbosity cranked up and add to this as a thread.
```
pulumi --logtostderr -v=6 up
I1105 13:47:00.265867   13151 backend.go:408] found username for access token
Previewing update (ware2go/chandler):
I1105 13:47:01.159588   13151 plugins.go:427] GetPluginPath(language, nodejs, <nil>): found on $PATH /usr/local/bin/pulumi-language-nodejs
I1105 13:47:01.412550   13151 plugins.go:466] GetPluginPath(resource, cloudflare, 0.17.14): enabling new plugin behavior
I1105 13:47:01.412581   13151 plugins.go:513] GetPluginPath(resource, cloudflare, 0.17.14): found in cache at /Users/mchandler/.pulumi/plugins/resource-cloudflare-v0.17.14/pulumi-resource-cloudflare
I1105 13:47:01.412825   13151 plugins.go:466] GetPluginPath(resource, docker, 0.17.2): enabling new plugin behavior
I1105 13:47:01.412841   13151 plugins.go:513] GetPluginPath(resource, docker, 0.17.2): found in cache at /Users/mchandler/.pulumi/plugins/resource-docker-v0.17.2/pulumi-resource-docker
I1105 13:47:01.413068   13151 plugins.go:466] GetPluginPath(resource, kubernetes, 0.24.0): enabling new plugin behavior
I1105 13:47:01.413083   13151 plugins.go:513] GetPluginPath(resource, kubernetes, 0.24.0): found in cache at /Users/mchandler/.pulumi/plugins/resource-kubernetes-v0.24.0/pulumi-resource-kubernetes
I1105 13:47:01.413102   13151 plugins.go:427] GetPluginPath(language, nodejs, <nil>): found on $PATH /usr/local/bin/pulumi-language-nodejs
I1105 13:47:01.413354   13151 plugins.go:466] GetPluginPath(resource, aws, 0.18.26): enabling new plugin behavior
I1105 13:47:01.413374   13151 plugins.go:513] GetPluginPath(resource, aws, 0.18.26): found in cache at /Users/mchandler/.pulumi/plugins/resource-aws-v0.18.26/pulumi-resource-aws

I1105 13:47:01.413634   13151 plan_executor.go:407] planExecutor.retirePendingDeletes(...): no pending deletions
I1105 13:47:01.413654   13151 plan_executor.go:217] planExecutor.Execute(...): waiting for incoming events
I1105 13:47:01.413685   13151 step_executor.go:321] StepExecutor worker(-2): worker coming online
I1105 13:47:01.413694   13151 step_executor.go:321] StepExecutor worker(-2): worker waiting for incoming chains
I1105 13:47:01.910149   13151 eventsink.go:60] Registering resource: t=pulumi:pulumi:Stack, name=infra-management-chandler, custom=false
I1105 13:47:01.914954   13151 eventsink.go:60] RegisterResource RPC prepared: t=pulumi:pulumi:Stack, name=infra-management-chandler
I1105 13:47:01.915742   13151 source_eval.go:787] ResourceMonitor.RegisterResource received: t=pulumi:pulumi:Stack, name=infra-management-chandler, custom=false, #props=0, parent=, protect=false, provider=, deps=[], deleteBeforeReplace=<nil>, ignoreChanges=[], aliases=[], customTimeouts={0 0 0}
I1105 13:47:01.915775   13151 source_eval.go:147] EvalSourceIterator produced a registration: t=pulumi:pulumi:Stack,name=infra-management-chandler,#props=0
I1105 13:47:01.915787   13151 plan_executor.go:221] planExecutor.Execute(...): incoming event (nil? false, <nil>)
I1105 13:47:01.915791   13151 plan_executor.go:380] planExecutor.handleSingleEvent(...): received RegisterResourceEvent
I1105 13:47:01.915825   13151 step_executor.go:321] StepExecutor worker(-2): worker received chain for execution
I1105 13:47:01.915832   13151 step_executor.go:321] StepExecutor worker(-2): worker waiting for incoming chains
I1105 13:47:01.915859   13151 step_executor.go:321] StepExecutor worker(0): launching oneshot worker
I1105 13:47:01.915900   13151 step_executor.go:321] StepExecutor worker(0): applying step create on urn:pulumi:chandler::infra-management::pulumi:pulumi:Stack::infra-management-chandler (preview true)
I1105 13:47:01.916020   13151 step_executor.go:321] StepExecutor worker(0): step create on urn:pulumi:chandler::infra-management::pulumi:pulumi:Stack::infra-management-chandler retired
I1105 13:47:01.916037   13151 source_eval.go:822] ResourceMonitor.RegisterResource operation finished: t=pulumi:pulumi:Stack, urn=urn:pulumi:chandler::infra-management::pulumi:pulumi:Stack::infra-management-chandler, stable=false, #stables=0 #outs=0
     Type                 Name                       Plan
 +   pulumi:pulumi:Stack  infra-management-chandler  create
I1105 13:47:01.919835   13151 eventsink.go:60] RegisterResource RPC finished: resource:infra-management-chandler[pulumi:pulumi:Stack]; err: null, resp: urn:pulumi:chandler::infra-management::pulumi:pulumi:Stack::infra-management-chandler
```
w
If you are seeing a hang, it is most likely https://github.com/pulumi/pulumi/issues/3309. This was fixed in recent Pulumi versions and is primarily an issue on fairly recent Node versions, which may explain the difference between what you and your coworker are seeing. Can you update to the latest Pulumi versions (both CLI and packages) and/or move to an earlier Node version?
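If it helps, here is a rough way to check which versions are in play and update them (the install script is Pulumi's official one; the CI guard at the end is just an illustrative sketch, not anything Pulumi ships):

```shell
# Show the CLI and Node versions involved in the hang.
pulumi version
node --version

# Update the Pulumi CLI via the official install script,
# then update the Node SDK package inside the project.
curl -fsSL https://get.pulumi.com | sh
npm update @pulumi/pulumi

# Illustrative: extract the Node major version, e.g. to fail fast
# in CI on a release affected by pulumi/pulumi#3309.
node_major="$(node --version | sed 's/^v//' | cut -d. -f1)"
echo "Node major version: ${node_major}"
```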
b
Awesome! Thank you for the quick response.
I had already tried a few Pulumi versions, which didn't seem to be the issue,
but my coworker had Node 10 and I was running 12.12. Downgrading Node fixed the issue.
Thanks again
c
For future reference, in case you want to try Node 12 again: you should also update the npm packages in your `package.json` to the latest (or to the version mentioned in the issue linked by Luke above), which should fix the hanging issue with Node 12.12. If you have a `package-lock.json` or `yarn.lock` file, you'd want to make sure that old versions are not in it.
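For example, one way to spot a stale SDK pin in a lockfile and then refresh the dependency (sketch only; adjust for yarn, and note the grep just surfaces whatever version string is recorded):

```shell
# Show any @pulumi/pulumi entries pinned in the lockfiles, if present.
grep -n '@pulumi/pulumi' package-lock.json yarn.lock 2>/dev/null

# If an old version shows up, bump the dependency; npm rewrites
# package-lock.json so the stale entry does not survive.
npm install @pulumi/pulumi@latest
```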