# automation-api
q
Hi all, we use the Automation API with the azureNative provider, and during deployments (Cosmos account/Cosmos database/Cosmos container/DNS zone and other resources) we sometimes encounter a LogException, e.g.:
```
I0916 10:13:10.119561   85179 rpc.go:77] Marshaling property for RPC[ResourceMonitor.RegisterResource(azure-native:cosmosdb:DatabaseAccount,database-name)]: isZoneRedundant={true}
I0916 10:13:10.119566   85179 rpc.go:77] Marshaling property for RPC[ResourceMonitor.RegisterResource(azure-native:cosmosdb:DatabaseAccount,database-name)]: locationName={East US}
I0916 10:13:10.119575   85179 rpc.go:77] Marshaling property for RPC[ResourceMonitor.RegisterResource(azure-native:cosmosdb:DatabaseAccount,database-name)]: provisioningState={Succeeded}
I0916 10:13:10.122488   85179 langruntime_plugin.go:431] langhost[client].Run(pwd=/home/site/wwwroot,root=/tmp/automation-ouc15u0b.wwu, program=/home/site/wwwroot, entryPoint=.,...,dryrun=false) success: progerr=Error occurred during logging, bail=true
I0916 10:13:10.122510   85179 source_eval.go:335] Program exited with error: BAIL: run bailed
I0916 10:13:10.122521   85179 source_eval.go:266] EvalSourceIterator ended with bail.
I0916 10:13:10.122532   85179 deployment_executor.go:288] deploymentExecutor.Execute(...): incoming source event (nil? true, BAIL: run bailed)
I0916 10:13:10.122546   85179 step_executor.go:616] StepExecutor worker(-1): StepExecutor.waitForCompletion(): waiting for worker threads to exit
I0916 10:13:10.122554   85179 step_executor.go:616] StepExecutor worker(0): worker exiting due to cancellation
I0916 10:13:10.122562   85179 step_executor.go:616] StepExecutor worker(6): worker exiting due to cancellation
I0916 10:13:10.122568   85179 step_executor.go:616] StepExecutor worker(3): worker exiting due to cancellation
I0916 10:13:10.122575   85179 step_executor.go:616] StepExecutor worker(7): worker exiting due to cancellation
I0916 10:13:10.122571   85179 step_executor.go:616] StepExecutor worker(2): worker exiting due to cancellation
I0916 10:13:10.122580   85179 step_executor.go:616] StepExecutor worker(5): worker exiting due to cancellation
I0916 10:13:10.122585   85179 step_executor.go:616] StepExecutor worker(1): worker exiting due to cancellation
I0916 10:13:10.122592   85179 step_executor.go:616] StepExecutor worker(4): worker exiting due to cancellation
I0916 10:13:10.122605   85179 step_executor.go:616] StepExecutor worker(-1): StepExecutor.waitForCompletion(): worker threads all exited
I0916 10:13:10.122616   85179 step_executor.go:616] StepExecutor worker(-1): registered resource outputs urn:pulumi:some-name::cosmosAccount::pulumi:pulumi:Stack::stack-name: old=#0, new=#0
I0916 10:13:10.122657   85179 snapshot.go:138] SnapshotManager: eliding RegisterResourceOutputs due to equal outputs
I0916 10:13:10.122665   85179 deployment_executor.go:358] deploymentExecutor.Execute(...): step executor has completed
I0916 10:13:10.122869   85179 deployment_executor.go:147] deploymentExecutor.Execute(...): exiting provider canceller
I0916 10:13:10.123684   85179 plugin.go:565] killing plugin /root/.pulumi/plugins/resource-azure-native-v3.1.0/pulumi-resource-azure-native
I0916 10:15:10.121649   85179 update.go:242] *** Update(preview=false) complete ***
I0916 10:15:10.131378   85179 snapshot.go:790] SnapshotManager: flushing elided writes...
I0916 10:15:10.155043   85179 state.go:341] file .pulumi/stacks/cosmosAccount/some-id.json.gz does not exist, skipping backup
I0916 10:15:10.187460   85179 state.go:246] Saved stack organization/cosmosAccount/some-id checkpoint to: .pulumi/stacks/cosmosAccount/some-id.json (backup=.pulumi/stacks/cosmosAccount/some-id.json.bak)
```
Is there some way to prevent such failures, e.g. cutting out some logs or failing silently? Or maybe it's a common issue with a known workaround? Or maybe something is wrong with our design? I'd appreciate any help. Thanks, Grzesiek
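For context, a minimal sketch of what such an Automation API deployment with a retry wrapper might look like in C#. The project/stack names and the inline program are hypothetical placeholders, and retrying only papers over the failure rather than preventing it:

```csharp
using System;
using System.Threading.Tasks;
using Pulumi.Automation;

class Program
{
    static async Task Main()
    {
        // Hypothetical inline program; stands in for the real Cosmos/DNS resources.
        var program = PulumiFn.Create(() =>
        {
            // ... declare the Cosmos account/database/container, DNS zone, etc.
        });

        var stack = await LocalWorkspace.CreateOrSelectStackAsync(
            new InlineProgramArgs("cosmosAccount", "some-stack", program));

        const int maxAttempts = 3;
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                await stack.UpAsync();
                Console.WriteLine("Update succeeded.");
                break;
            }
            catch (Exception ex) when (attempt < maxAttempts)
            {
                // Per the log above, the checkpoint was still saved when the run
                // bailed, so re-running the update should converge; verify that
                // holds for your stacks before relying on it.
                Console.WriteLine($"Attempt {attempt} failed ({ex.Message}); retrying...");
            }
        }
    }
}
```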
e
Can you raise an issue on our dotnet repo (github.com/pulumi/pulumi-dotnet)? It's not expected to be getting LogExceptions unless the engine has already crashed out, but if we're not printing that then we might be missing a debug print somewhere. Needs looking into.
🫡 1
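If it helps with the repro, here's a minimal sketch (reusing the hypothetical `stack` object from the snippet above) of capturing the CLI's stdout/stderr through `UpOptions`, so the full engine log can be saved and attached to the issue even when the run bails:

```csharp
using System.IO;
using System.Text;
using Pulumi.Automation;

var stdout = new StringBuilder();
var stderr = new StringBuilder();

try
{
    // OnStandardOutput/OnStandardError receive each line the pulumi CLI
    // writes, including the verbose engine log when it is enabled.
    await stack.UpAsync(new UpOptions
    {
        OnStandardOutput = line => stdout.AppendLine(line),
        OnStandardError = line => stderr.AppendLine(line),
    });
}
finally
{
    // Persist both streams even when the update throws, so they can be
    // attached to the bug report.
    File.WriteAllText("pulumi-up.stdout.log", stdout.ToString());
    File.WriteAllText("pulumi-up.stderr.log", stderr.ToString());
}
```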
q
> it's not expected to be getting LogExceptions unless the engine has already crashed out
So I assume we can't actually work around it? And it's probably a bug either in our code or in the Pulumi engine?
e
Yeah, I don't think there's any way to work around it, but it's probably a Pulumi bug where the engine is either crashing or the runtime is still trying to log after shutdown