prehistoric-london-9917
05/26/2021, 11:41 PM
`preview` throws, which is understandable. `up` behaves the same way, though:
pulumi:pulumi:Stack (demo_env-mattr):
  error: Running program '/Users/matthew.riedel/Source/devex/demo_env' failed with an unhandled exception:
  Error: invocation of aws:lb/getLoadBalancer:getLoadBalancer returned an error: invoking aws:lb/getLoadBalancer:getLoadBalancer: 1 error occurred:
    * error retrieving LB: LoadBalancerNotFound: Load balancers '[mattr-mattr-lb]' not found
      status code: 400, request id: 25a5f702-7435-435c-a5a7-ccd7aa948357

billowy-army-68599
does the k8s Service have a status field that gets populated with the load balancer name? if it does, might be able to use an apply() on that
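[editor's note] A minimal sketch of that apply() approach, assuming a Service of type LoadBalancer (here `svc`) defined elsewhere in the program. The hostname parsing is an assumption that the DNS name looks like `<lb-name>-<hash>.<region>.elb.amazonaws.com`; the thread doesn't confirm that format.

```typescript
import * as aws from "@pulumi/aws";
import * as k8s from "@pulumi/kubernetes";
import * as pulumi from "@pulumi/pulumi";

// `svc` stands in for the Service of type LoadBalancer created elsewhere.
declare const svc: k8s.core.v1.Service;

// The AWS cloud controller eventually writes the ELB's DNS name into the
// Service's status; Pulumi's output lifting lets us reach into it directly.
const hostname: pulumi.Output<string> =
    svc.status.loadBalancer.ingress[0].hostname;

// Recover the load balancer's AWS name from its DNS name and look it up.
// (Assumption: the hostname's first label is "<lb-name>-<hash>".)
const lb = hostname.apply(h => {
    const name = h.split(".")[0].split("-").slice(0, -1).join("-");
    return aws.lb.getLoadBalancer({ name });
});
```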
prehistoric-london-9917
06/01/2021, 8:31 PM
I've tried a few .apply approaches (the current one below), and they exhibit the same issue. My hunch is that k8s gets the LB hostname, but AWS hasn't actually finished creating it yet, so when getLoadBalancer is called, it's not found. I suppose I could wrap the whole thing in some sort of delay, but I'm wondering if there might be a better solution.
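[editor's note] For reference, the "some sort of delay" idea might look like the sketch below, reusing `hostname` from the earlier sketch. It's brittle: no fixed delay is guaranteed to be long enough, and the 60s figure is an illustrative guess.

```typescript
// Naive fixed-delay variant: wait, then hope the LB is describable.
const sleep = (ms: number) => new Promise<void>(resolve => setTimeout(resolve, ms));

const lbAfterDelay = hostname.apply(async h => {
    await sleep(60_000); // illustrative guess, not a tested value
    const name = h.split(".")[0].split("-").slice(0, -1).join("-");
    return aws.lb.getLoadBalancer({ name });
});
```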
prehistoric-london-9917
06/02/2021, 3:19 AM
I ended up using the ts-retry module as a workaround. This seems to work for my purposes.
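[editor's note] That workaround might look roughly like this: retry the invoke until the describe call stops failing with LoadBalancerNotFound. The `delay` and `maxTry` values are illustrative, not tuned; `hostname` and the name parsing are carried over from the earlier sketch.

```typescript
import * as aws from "@pulumi/aws";
import { retryAsync } from "ts-retry";

// Retry aws.lb.getLoadBalancer until AWS has actually finished creating
// the load balancer (the invoke rejects while the LB is still not found).
const lbRetried = hostname.apply(h => {
    const name = h.split(".")[0].split("-").slice(0, -1).join("-");
    return retryAsync(
        () => aws.lb.getLoadBalancer({ name }),
        { delay: 10_000, maxTry: 30 }, // poll every 10s, ~5 minutes max
    );
});

export const lbArn = lbRetried.arn;
```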