rhythmic-finland-36256
06/15/2020, 7:03 PM
updating urn:pulumi:dev::streamm::ajaegle:azureaks:AksCluster$azure:containerservice/kubernetesCluster:KubernetesCluster::streamm-dev-aks: updating Managed Kubernetes Cluster "streamm-dev-aks8b71e006" (Resource Group "streamm-devf3b8623a"): containerservice.ManagedClustersClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidLoadBalancerProfile" Message="Load balancer profile must specify one of ManagedOutboundIPs, OutboundIPPrefixes and OutboundIPs." Target="networkProfile.loadBalancerProfile"
The initial deployment worked (the cluster was created with the first pulumi up). Every additional update though ends up in this error. I use Standard SKU load balancers and didn't specify the loadBalancerIP upfront, as this is handled afterwards by deploying the ingress controller. This still worked with a pulumi up this afternoon. Is it possible that things change in the background even if I specify explicit versions and use npm ci leveraging the package-lock.json? The cluster is created with the loadBalancerProfile with the ingress IP already set, like
loadBalancerProfile: {
outboundIpAddressIds: [args.loadBalancerIpForEgress.id],
},
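
For reference, a minimal self-contained sketch of how such a profile can be wired up with the classic @pulumi/azure provider. The resource names, node pool and identity settings below are illustrative, not from this thread; only the loadBalancerProfile mirrors the snippet above, and exact property casing can vary by provider version.

import * as azure from "@pulumi/azure";

// Hypothetical resource names for illustration.
const rg = new azure.core.ResourceGroup("streamm-dev", { location: "WestEurope" });

// A dedicated static public IP used for egress, created before the cluster so the
// outbound profile does not depend on what the ingress controller provisions later.
const egressIp = new azure.network.PublicIp("egress-ip", {
    resourceGroupName: rg.name,
    location: rg.location,
    allocationMethod: "Static",
    sku: "Standard",
});

const cluster = new azure.containerservice.KubernetesCluster("streamm-dev-aks", {
    resourceGroupName: rg.name,
    location: rg.location,
    dnsPrefix: "streamm-dev",
    defaultNodePool: { name: "default", nodeCount: 2, vmSize: "Standard_D2_v2" },
    identity: { type: "SystemAssigned" },
    networkProfile: {
        networkPlugin: "azure",
        loadBalancerSku: "standard",
        loadBalancerProfile: {
            // The explicit outbound IP satisfies the "must specify one of
            // ManagedOutboundIPs, OutboundIPPrefixes and OutboundIPs" requirement.
            outboundIpAddressIds: [egressIp.id],
        },
    },
});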
famous-jelly-72366
06/16/2020, 2:30 PM
loadBalancerProfile: {
managedOutboundIpCount: 2,
}
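
For context, a small sketch of where that managed-outbound variant would sit in the cluster's networkProfile, assuming the classic @pulumi/azure provider; the surrounding fields are illustrative, not from the thread.

// A hypothetical alternative to referencing an explicit egress IP: let AKS
// allocate and manage the outbound public IPs itself.
export const managedOutboundNetworkProfile = {
    networkPlugin: "azure",
    loadBalancerSku: "standard",   // a loadBalancerProfile requires the Standard SKU
    loadBalancerProfile: {
        managedOutboundIpCount: 2, // AKS creates two managed outbound public IPs
    },
};
// Pass this object as `networkProfile` when constructing the KubernetesCluster.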
rhythmic-finland-36256
06/16/2020, 4:56 PM
The cluster itself works (I can access it with kubectl, so it was created successfully). That error only occurs when performing another pulumi up with an existing cluster (even if there are no programmatical changes to the AKS part of the pulumi program). I initially installed the dependencies with npm install. I removed my node_modules again and did a fresh npm ci (not npm install) to make sure I get the locked versions. How can I find out what's happening here?
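
One way to rule out background changes in the provider itself (a sketch, not something from this thread): Pulumi resource options accept an explicit version, which pins the provider plugin used for that resource regardless of what npm happens to resolve.

import * as pulumi from "@pulumi/pulumi";

// Hypothetical: pin the azure provider plugin version on the cluster resource,
// so every pulumi up operates on it with exactly this plugin.
const pinnedProvider: pulumi.CustomResourceOptions = {
    version: "3.11.0", // example value; match the @pulumi/azure version in package-lock.json
};
// e.g. new azure.containerservice.KubernetesCluster("streamm-dev-aks", clusterArgs, pinnedProvider)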