
    quaint-match-50796

    2 months ago
    Hi, does anyone else have issues with this sequence:
    1. Create an EKS cluster
    2. Set up the AWS Load Balancer Controller
    3. Create some Services that deploy load balancers
    4. Try to take down the stack
    As a result, we get stuck deleting a subnet because the load balancers have not been deleted. Has anyone implemented anything for this? The options I can think of are manual removal, or listing the load balancers and removing them programmatically.

    polite-napkin-90098

    2 months ago
    As the ALBs are created out of band (by the controller in the EKS cluster) and not by Pulumi, I think the best thing might be to import them after creation and sort out the dependency tree so they get deleted before the subnets.
    I'm just starting to do the same thing as you, so I am interested in the solution here.
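In case it helps, a sketch of what that import might look like with the Pulumi CLI. The resource name "adopted-alb" and the ARN are placeholders I made up, not values from this thread; for aws.lb.LoadBalancer the import ID is the load balancer's ARN.

```shell
# Sketch only: adopting a controller-created ALB into the Pulumi stack so that
# destroy ordering takes it into account. Name and ARN below are placeholders.
import_alb() {
  pulumi import aws:lb/loadBalancer:LoadBalancer adopted-alb \
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/50dc6c495c0c9188"
}

# Only meaningful inside a Pulumi project with a stack selected.
if command -v pulumi >/dev/null 2>&1; then
  import_alb || echo "pulumi import failed (no project/stack selected?)"
fi
```

After the import you would still need to add a matching resource definition to the program, with dependencies arranged so the ALB is destroyed before the subnets.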

    salmon-account-74572

    2 months ago
    I have observed the same behavior and AFAIK @polite-napkin-90098’s response is accurate. We’ve had to remove the load balancers first (by deleting the workloads on the cluster) and then run `pulumi destroy`. I haven’t found an automated solution yet.
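A minimal sketch of that manual order of operations, assuming kubectl's current context points at the EKS cluster and the pulumi CLI is logged in (the wait time and the decision to delete every LoadBalancer-type Service are my own illustration):

```shell
#!/usr/bin/env bash
# Sketch of the manual teardown order described above: delete the Kubernetes
# Services that provisioned load balancers, let the AWS Load Balancer Controller
# reclaim them on the AWS side, then destroy the Pulumi stack.

teardown_cluster() {
  # 1. Delete every Service of type LoadBalancer so the controller deletes the
  #    corresponding ALBs/NLBs in AWS.
  kubectl get svc --all-namespaces \
    -o jsonpath='{range .items[?(@.spec.type=="LoadBalancer")]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' \
  | while read -r ns name; do
      kubectl delete svc -n "$ns" "$name" --wait=true
    done

  # 2. Give the controller time to finish deleting the AWS-side load balancers
  #    (a fixed sleep is a simplification; polling ELBv2 would be more robust).
  sleep 60

  # 3. The subnets no longer have attached load balancers, so destroy can proceed.
  pulumi destroy --yes
}

# Only run when the required CLIs are actually available (sketch, not CI-ready).
if command -v kubectl >/dev/null 2>&1 && command -v pulumi >/dev/null 2>&1; then
  teardown_cluster
fi
```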

    quaint-match-50796

    2 months ago
    @polite-napkin-90098 Yes, this is one solution. I'm now investigating the Automation API, so we can detect the destroy event more easily and, based on tags on the load balancers (which we get with getLoadBalancer), remove the ones we wish.
    @salmon-account-74572 Yes, deleting the workloads first would work as well. But sometimes we just want a fast cluster removal. The Automation API, combined with being able to look up specific load balancers, is one solution we are experimenting with.
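For what it's worth, a minimal Python sketch of that tag-based cleanup idea, run just before an Automation API destroy. The tag key `elbv2.k8s.aws/cluster` is the one the AWS Load Balancer Controller applies to the load balancers it creates; the helper names and the use of boto3 (instead of Pulumi's getLoadBalancer data source) are my own illustration, not a confirmed implementation.

```python
def controller_owned_arns(tag_descriptions, cluster_name):
    """Return ARNs of load balancers tagged as belonging to `cluster_name`.

    `tag_descriptions` has the shape of elbv2 describe_tags()["TagDescriptions"]:
    [{"ResourceArn": ..., "Tags": [{"Key": ..., "Value": ...}, ...]}, ...]
    """
    owned = []
    for desc in tag_descriptions:
        tags = {t["Key"]: t["Value"] for t in desc.get("Tags", [])}
        # The AWS Load Balancer Controller tags its LBs with the cluster name.
        if tags.get("elbv2.k8s.aws/cluster") == cluster_name:
            owned.append(desc["ResourceArn"])
    return owned


def cleanup_then_destroy(cluster_name, stack):
    """Delete controller-created load balancers, then destroy the Pulumi stack.

    `stack` is assumed to be a pulumi.automation.Stack; requires AWS credentials.
    """
    import boto3  # imported lazily so the pure helper above stays dependency-free

    elbv2 = boto3.client("elbv2")
    lbs = elbv2.describe_load_balancers()["LoadBalancers"]
    arns = [lb["LoadBalancerArn"] for lb in lbs]
    if arns:
        # Note: describe_tags accepts at most 20 ARNs per call; chunking omitted.
        tag_desc = elbv2.describe_tags(ResourceArns=arns)["TagDescriptions"]
        for arn in controller_owned_arns(tag_desc, cluster_name):
            elbv2.delete_load_balancer(LoadBalancerArn=arn)
    stack.destroy(on_output=print)
```

With the out-of-band load balancers gone first, the subnet deletions in the destroy should no longer get stuck.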

    salmon-account-74572

    2 months ago
    @quaint-match-50796 I look forward to hearing about the results of your testing/investigation!