
tall-librarian-49374

03/13/2021, 3:10 PM
@better-shampoo-48884 there are built-in retry mechanisms, and they are developed by Microsoft in the library that we use. So an error like this is deemed non-retriable at some point. This seems to warrant a GitHub issue, especially if you have a relatively reliable repro.

better-shampoo-48884

03/13/2021, 3:13 PM
I would, but I'm not certain I can create a shareable variant that reproduces the behavior... the network setup as it is right now is slightly sensitive and I'm not certain I have the authority to share the composition 😉 If I have time I'll create a generic version that reproduces the issue and submit it at some point. I presumed there was a retry, I just found it strange that it happened three times in a row (though on two different subnets, so I guess I'm perpetually unlucky with this at the moment).
In other news, this might just be the perfect excuse for me to get into the whole automation-api side of things 😄
Any thoughts on what the "storage account X could not be created because storage account X was not found" issue is all about? It happened at least twice; I'm not certain I can reproduce it every time yet.

tall-librarian-49374

03/13/2021, 3:15 PM
You can also turn on full logging, which will capture the raw HTTP requests. If you can share those (obfuscated), we can find the cause.
No, I haven’t seen this error before

better-shampoo-48884

03/13/2021, 3:16 PM
excellent - I'll turn that on the next time I rebuild and hopefully we'll catch it 😉

tall-librarian-49374

03/13/2021, 3:17 PM
It’s something like
pulumi up -v=9 --debug --logflow --logtostderr
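Since --logtostderr routes the verbose log to stderr, one way to capture it for sharing (the log file name here is just an example) is to redirect stderr to a file:
pulumi up -v=9 --debug --logflow --logtostderr 2> pulumi-up-debug.log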

better-shampoo-48884

03/13/2021, 3:17 PM
yup, seen those flags somewhere - I'll get to them. So far I've only run traces (and those are awesome by the way, thanks for including that!)
Sorry for bringing this up again... I just hit this problem twice more while trying to recreate the environment (so I destroyed it all again, then brought up the RG and KeyVault, and hit apply again - same thing happened)... so I did another wipe back down to the RG+KeyVault state and decided to log it. Now it didn't happen 😕
The reason I'm mentioning this is that I feel it might actually be a race condition issue - and the debug logging might have slowed things down enough to avoid it?
@tall-librarian-49374 - would it be possible for me to send you the output of the log more directly? I've finally reproduced it cleanly by trying to up a new resource group with only network components in it. I can send you the code I use as well if need be - but it does show the "AnotherOperationInProgress" issue:
I0325 09:02:09.325742    6864 eventsink.go:78] eventSink::Infoerr(  "error": {
 +  pulumi:pulumi:Stack baseline-infra-dev.infra.infratesting creating X-Ms-Arm-Service-Request-Id: 85df7794-e50f-456e-b26f-b38ee3932adc
I0325 09:02:09.325742    6864 eventsink.go:78] eventSink::Infoerr(    "code": "AnotherOperationInProgress",
 +  pulumi:pulumi:Stack baseline-infra-dev.infra.infratesting creating X-Ms-Correlation-Request-Id: 4277e52d-a5f0-4891-98c7-566c8fd5c8f5
I0325 09:02:09.325742    6864 eventsink.go:78] eventSink::Infoerr(    "message": "Another operation on this or dependent resource is in progress. To retrieve status of the operation use uri: https://management.azure.com/subscriptions/<redacted>/providers/Microsoft.Network/locations/westeurope/operations/8974f472-6871-4414-817a-fd8a78038cb1?api-version=2020-08-01.",
I0325 09:02:09.326304    6864 eventsink.go:78] eventSink::Infoerr(    "details": []
I0325 09:02:09.326365    6864 eventsink.go:78] eventSink::Infoerr(  }
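For context on the race-condition theory above: Azure's network resource provider rejects concurrent writes against the same virtual network with AnotherOperationInProgress. A minimal sketch of one common workaround, using the azure-native TypeScript SDK with hypothetical resource names (not necessarily the fix for this particular stack), is to chain the subnets with dependsOn so the writes are never issued in parallel:

import * as azure from "@pulumi/azure-native";

// Hypothetical names; the point is only the explicit dependsOn chaining.
const vnet = new azure.network.VirtualNetwork("vnet", {
    resourceGroupName: "rg-infratesting",
    addressSpace: { addressPrefixes: ["10.0.0.0/16"] },
});

const subnetA = new azure.network.Subnet("subnet-a", {
    resourceGroupName: "rg-infratesting",
    virtualNetworkName: vnet.name,
    addressPrefix: "10.0.1.0/24",
});

// subnet-b waits for subnet-a, so the two writes to the same VNet never
// run concurrently and cannot trigger AnotherOperationInProgress.
const subnetB = new azure.network.Subnet("subnet-b", {
    resourceGroupName: "rg-infratesting",
    virtualNetworkName: vnet.name,
    addressPrefix: "10.0.2.0/24",
}, { dependsOn: [subnetA] });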

tall-librarian-49374

03/25/2021, 8:16 AM
sure, send it to mikhail@pulumi.com

better-shampoo-48884

03/25/2021, 8:17 AM
thanks!
would you mind if I zip them? Any preference between .zip or .7z?

tall-librarian-49374

03/25/2021, 8:31 AM
Got your email, thank you