
best-xylophone-83824

09/02/2019, 1:05 PM
does anybody experience slowness of pulumi runs? A simple run with no refresh, only a single stack output variable added, and no resource changes per the plan has been running for 10 minutes by now... I'll try to gather more debug info

tall-librarian-49374

09/02/2019, 1:08 PM
What do you see during those 10 minutes? Which OS are you on? Which provider?

best-xylophone-83824

09/02/2019, 1:10 PM
It just showed the full output and then hung for 10 mins, then printed:
Resources:
    159 unchanged

Duration: 10m20s
Permalink: https://app.pulumi.com/Nakhoda/gcp-gke/gke01-london/updates/129
we don't use --refresh, and there were no changes apart from the added output variable, so it never hit the provider (or at least it shouldn't have, the way I understand pulumi works).
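e.g. a quick way to confirm the plan really is a no-op before updating (a standard CLI sketch, nothing specific to our setup):

    pulumi preview --diff    # should report only the added stack output, zero resource changes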

tall-librarian-49374

09/02/2019, 1:13 PM
Interesting. Is it just once or does it happen every time?

best-xylophone-83824

09/02/2019, 1:18 PM
it is reproducible, I just reran the same job on a pipeline, zero changes this time, yet it's been 4 minutes and it still looks very slow. It trickles "output" objects one by one, once a minute or so. Once it finishes I'll rerun the same commit from my dev machine to rule out our CI setup

white-balloon-205

09/02/2019, 7:54 PM
Could you run with --logtostderr -v=9 and see if there are any hints of what is taking so long?
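For example (just a sketch; the stderr redirect and log file name are illustrative, not anything Pulumi-specific):

    pulumi up --logtostderr -v=9 2> pulumi-verbose.log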

best-xylophone-83824

09/03/2019, 9:41 AM
ha, it exceeded the CI job output limit 🙂 if I don't set --logtostderr, which file does it log to?
@tall-librarian-49374, I checked the output; there is a "Marked old state snapshot as done" record for each resource, which seems to be taking 2.3-2.5 seconds each. We have 165 resources so far, so it quickly adds up and our no-op pulumi up ends up running for > 10 minutes
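A rough way to see that pattern in the verbose log (assuming the illustrative pulumi-verbose.log name from above; the glog timestamp at the start of each line shows the gap between entries):

    grep -c "Marked old state snapshot as done" pulumi-verbose.log        # one entry per resource
    grep "Marked old state snapshot as done" pulumi-verbose.log | head    # timestamps show the ~2.5s gaps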

tall-librarian-49374

09/04/2019, 11:41 AM
Aha! Thanks, I'll ask around.
Could you file an issue for that?

best-xylophone-83824

09/04/2019, 11:45 AM
doing it now
👍 1

tall-librarian-49374

09/04/2019, 11:53 AM
thank you!

best-xylophone-83824

09/04/2019, 4:49 PM
@white-balloon-205, @creamy-potato-29402, this slowness is making us unproductive, can you think of a workaround we can apply to speed it up? Maybe convert to the local backend, do a full run, and convert back? Anything would help
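Roughly what I have in mind, as an untested sketch (stack name taken from the permalink above, file names are placeholders, and it assumes the secrets/config setup survives the round trip):

    pulumi stack export --file state.json       # save current state from the service backend
    pulumi login --local                        # switch to the local filesystem backend
    pulumi stack init gke01-london-local        # create a local stack
    pulumi stack import --file state.json       # load the exported state into it
    pulumi up                                   # run the update against the local backend
    pulumi stack export --file state-new.json   # capture the resulting state
    pulumi login                                # log back into the Pulumi service
    pulumi stack select gke01-london
    pulumi stack import --file state-new.json   # push the updated state back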

white-balloon-205

09/04/2019, 4:50 PM
Looking into it now. Could you share any more of the detailed logs?

best-xylophone-83824

09/04/2019, 4:50 PM
@white-balloon-205, I can send you the full logs over email

white-balloon-205

09/04/2019, 4:50 PM
Also - what is the stack name in question?

best-xylophone-83824

09/04/2019, 4:51 PM
sent, please confirm if you've received it

white-balloon-205

09/04/2019, 4:57 PM
Yes - I got the mail. Will look at that and reply on the issue once we understand what might be causing this.

best-xylophone-83824

09/04/2019, 4:58 PM
Thanks!