# aws

bored-barista-23480

02/15/2022, 3:09 PM
Hi folks! I'm trying to deploy a setup to an EKS cluster. A service is made public via an Application Load Balancer set up by the Load Balancer Controller (LBC). But `pulumi destroy` never works; it always hangs on the Ingress resource. Manually deleting the finalizer set on the Ingress by the LBC while `destroy` is ongoing works. But even when I explicitly make the Ingress depend on the Helm chart of the LBC, the LBC pods get destroyed before the Ingress, and most importantly: before the finalizer was removed from the Ingress by the LBC. So the destroy operation always needs manual intervention. Does anyone have an idea what causes this behavior, or even how to solve it?

incalculable-midnight-8291

02/16/2022, 3:39 PM
Same here. It probably happens because the LB that gets created from the service isn't visible to Pulumi. So it tears down the EKS cluster, because there doesn't seem to be any reason to uninstall everything first. But that leaves the LB behind, so the VPC isn't empty … and yeah, messed-up state. What we did is separate it so that we only set up the AWS infra with Pulumi and roll out the ingress controller etc. with Helm afterwards, then a custom destroy script that uninstalls the ingress controller first. I would love to set it up so that we pre-create the LB ourselves and tie the ingress controller to that LB with service type NodePort instead of type LoadBalancer.

bored-barista-23480

02/16/2022, 4:30 PM
But shouldn't the LBC, on deletion via Kubernetes, wait until the LB was deleted, remove the finalizer from the Ingress, and THEN shut down? I wonder if it has something to do with a SIGKILL being initiated by Kubernetes/Docker because the shutdown of the LBC takes too long after a SIGTERM, so the LBC couldn't complete a graceful termination.
I also thought of writing a weird dynamic resource that calls the Kubernetes API and removes the finalizer on a `destroy` operation, combined with some carefully chosen explicit `dependsOn` values.
@incalculable-midnight-8291 I found the solution - at least for my setup. Maybe it could help you too. Helm charts are of type `ComponentResource`, and thus using them as `dependsOn` values has no effect (this seems to be something the Pulumi team is working on). That's why the LBC terminates without the Ingress and AWS resources, and thus the finalizer, having been removed. The solution is not to pass `[helmChart]` as a dependency but `helmChart.ready`.
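A minimal sketch of that fix (the `lbcChart` name and the Ingress spec are illustrative, not from this thread): assign the LBC chart to a variable and pass its `ready` output as the `dependsOn` of the Ingress, so that on `destroy` Pulumi deletes the Ingress, and waits for the LBC to remove the finalizer, before tearing down the chart's pods.

```typescript
import * as k8s from "@pulumi/kubernetes";

// Hypothetical handle for the AWS Load Balancer Controller chart.
const lbcChart = new k8s.helm.v3.Chart("aws-load-balancer-controller", {
    chart: "aws-load-balancer-controller",
    fetchOpts: { repo: "https://aws.github.io/eks-charts" },
    namespace: "kube-system",
});

// Depending on `lbcChart.ready` (an output listing the chart's child
// resources) instead of `[lbcChart]` makes the dependency effective,
// since the ComponentResource itself is ignored by dependsOn.
const ingress = new k8s.networking.v1.Ingress("my-ingress", {
    metadata: { annotations: { "kubernetes.io/ingress.class": "alb" } },
    spec: { /* rules omitted for brevity */ },
}, { dependsOn: lbcChart.ready });
```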
👍 1

incalculable-midnight-8291

02/18/2022, 1:39 PM
Thanks for the ping back @bored-barista-23480!
👍 1

bored-barista-23480

02/18/2022, 6:56 PM
You're welcome. 😉

incalculable-midnight-8291

02/23/2022, 2:32 PM
@bored-barista-23480 I started to look into it and I'm not sure how to implement it. Would you mind showing some code? Is it the `new k8s.helm.v3.Chart` that somehow has a `dependsOn` on the cluster? If so, how do I say `ready`? Is it something like this?
```typescript
const cluster = new eks.Cluster(clusterName, {...});

new k8s.helm.v3.Chart('ingress-nginx', {
    version: '4.0.17',
    chart: 'ingress-nginx',
    fetchOpts: {
        repo: 'https://kubernetes.github.io/ingress-nginx',
    },
}, {
    dependsOn: [cluster],
});
```
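[Editor's note: to make the earlier answer concrete, here is a sketch of how that snippet would be adapted per bored-barista-23480's fix. The chart's own `dependsOn: [cluster]` can stay; the `ready` output is used on resources that the controller must clean up. The `webIngress` name and spec are illustrative.]

```typescript
import * as eks from "@pulumi/eks";
import * as k8s from "@pulumi/kubernetes";

const cluster = new eks.Cluster("my-cluster");

// Assign the chart to a variable so its `ready` output can be referenced.
const nginxChart = new k8s.helm.v3.Chart("ingress-nginx", {
    version: "4.0.17",
    chart: "ingress-nginx",
    fetchOpts: { repo: "https://kubernetes.github.io/ingress-nginx" },
}, {
    dependsOn: [cluster],        // ordering against the cluster is fine here
    provider: cluster.provider,  // target the new cluster
});

// Any Ingress (or other resource the controller must clean up on destroy)
// should depend on `nginxChart.ready`, not `[nginxChart]`.
const webIngress = new k8s.networking.v1.Ingress("web", {
    spec: { /* rules omitted for brevity */ },
}, { dependsOn: nginxChart.ready, provider: cluster.provider });
```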