# google-cloud
**a:** Hey Pulumi folks, we are running into an issue. We have an engineer who can run `import` and `preview` commands in other projects, but there is one project where these commands hang. Other folks (I confirmed it myself) can run these commands on the project in question, so it is not a project configuration issue. I had her run with verbose logging using:
```
pulumi preview --logtostderr --logflow -v=9 2> out.txt
```
It looks like something is timing out and retrying. Here is a section of the log, right before it starts retrying indefinitely:
```
I1101 11:16:43.185967    7746 eventsink.go:59] RegisterResource RPC prepared: t=gcp:monitoring/alertPolicy:AlertPolicy, name=gcp.monitoring.AlertPolicy.tealiumSlackAbandonmentAlertPolicyV1
I1101 11:16:43.185995    7746 eventsink.go:62] eventSink::Debug(<{%reset%}>RegisterResource RPC prepared: t=gcp:monitoring/alertPolicy:AlertPolicy, name=gcp.monitoring.AlertPolicy.tealiumSlackAbandonmentAlertPolicyV1<{%reset%}>)
I1101 11:16:43.186782    7746 eventsink.go:59] RegisterResourceOutputs RPC prepared: urn=urn:pulumi:playground::best-airflow::pulumi:pulumi:Stack::best-airflow-playground
I1101 11:16:43.186810    7746 eventsink.go:62] eventSink::Debug(<{%reset%}>RegisterResourceOutputs RPC prepared: urn=urn:pulumi:playground::best-airflow::pulumi:pulumi:Stack::best-airflow-playground<{%reset%}>)
I1101 11:16:43.187671    7746 eventsink.go:59] RegisterResourceOutputs RPC finished: urn=urn:pulumi:playground::best-airflow::pulumi:pulumi:Stack::best-airflow-playground; err: null, resp: 
I1101 11:16:43.187702    7746 eventsink.go:62] eventSink::Debug(<{%reset%}>RegisterResourceOutputs RPC finished: urn=urn:pulumi:playground::best-airflow::pulumi:pulumi:Stack::best-airflow-playground; err: null, resp: <{%reset%}>)
I1101 11:16:43.188499    7746 eventsink.go:59] RegisterResourceOutputs RPC finished: urn=urn:pulumi:playground::best-airflow::pulumi:pulumi:Stack::best-airflow-playground; err: null, resp: 
I1101 11:16:43.188525    7746 eventsink.go:62] eventSink::Debug(<{%reset%}>RegisterResourceOutputs RPC finished: urn=urn:pulumi:playground::best-airflow::pulumi:pulumi:Stack::best-airflow-playground; err: null, resp: <{%reset%}>)
I1101 11:17:09.970503    7746 eventsink.go:59] Dismissed an error as retryable. marked as timeout - Get "https://openidconnect.googleapis.com/v1/userinfo?alt=json": Post "https://oauth2.googleapis.com/token": dial tcp: lookup oauth2.googleapis.com: i/o timeout
I1101 11:17:09.970558    7746 eventsink.go:62] eventSink::Debug(<{%reset%}>Dismissed an error as retryable. marked as timeout - Get "https://openidconnect.googleapis.com/v1/userinfo?alt=json": Post "https://oauth2.googleapis.com/token": dial tcp: lookup oauth2.googleapis.com: i/o timeout<{%reset%}>)
I1101 11:17:09.970944    7746 eventsink.go:59] Dismissed an error as retryable. marked as timeout - Post "https://oauth2.googleapis.com/token": dial tcp: lookup oauth2.googleapis.com: i/o timeout
I1101 11:17:09.970983    7746 eventsink.go:62] eventSink::Debug(<{%reset%}>Dismissed an error as retryable. marked as timeout - Post "https://oauth2.googleapis.com/token": dial tcp: lookup oauth2.googleapis.com: i/o timeout<{%reset%}>)
I1101 11:17:09.971231    7746 eventsink.go:59] Dismissed an error as retryable. marked as timeout - dial tcp: lookup oauth2.googleapis.com: i/o timeout
I1101 11:17:09.971265    7746 eventsink.go:62] eventSink::Debug(<{%reset%}>Dismissed an error as retryable. marked as timeout - dial tcp: lookup oauth2.googleapis.com: i/o timeout<{%reset%}>)
I1101 11:17:09.971552    7746 eventsink.go:59] Waiting 500ms before next try
```
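If I'm reading the log right, every retry is the same failure: the DNS lookup for `oauth2.googleapis.com` times out while the provider tries to refresh its OAuth token. A minimal Go sketch to reproduce just that lookup outside Pulumi, run on her machine (the 10s timeout is an assumption, roughly matching the gap in the log):

```go
// dnscheck.go - reproduces the failing step from the log
// ("dial tcp: lookup oauth2.googleapis.com: i/o timeout")
// without involving Pulumi. If this also times out, the
// machine's DNS resolver is the problem, not Pulumi or GCP.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	addrs, err := net.DefaultResolver.LookupHost(ctx, "oauth2.googleapis.com")
	if err != nil {
		fmt.Println("DNS lookup failed:", err)
		return
	}
	fmt.Println("resolved:", addrs)
}
```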
**g:** Just guessing here, but it sounds like an IAM issue? Maybe she doesn't have access to something the rest of you do, and GCP or Pulumi is acting strangely because of it.
**a:** The engineer in question is in a group with the Project Owner role, so I do not think it's an IAM issue.
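The call the log shows failing isn't a permission check anyway; it's the OAuth2 token refresh itself (the `POST https://oauth2.googleapis.com/token`). A hedged sketch that attempts the same token fetch via Application Default Credentials, again independent of Pulumi (the `cloud-platform` scope is an assumption):

```go
// tokencheck.go - attempts the same OAuth2 token refresh the log
// shows failing, using Application Default Credentials. If DNS is
// broken on the machine, the i/o timeout should reproduce here.
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/oauth2/google"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// The scope is an illustrative assumption, not taken from the thread.
	creds, err := google.FindDefaultCredentials(ctx, "https://www.googleapis.com/auth/cloud-platform")
	if err != nil {
		fmt.Println("loading default credentials failed:", err)
		return
	}

	tok, err := creds.TokenSource.Token()
	if err != nil {
		fmt.Println("token refresh failed:", err)
		return
	}
	fmt.Println("got token, expires:", tok.Expiry)
}
```

An IAM problem would surface after the token is obtained, as a 403 from the GCP API; a hang before any token exists points at the network or resolver on that one machine.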