# aws
b
Hi folks! I'm trying to deploy a setup to an EKS cluster. A service is made public via an Application Load Balancer set up by the Load Balancer Controller (LBC). But `pulumi destroy` never works; it always hangs on the Ingress resource. Manually deleting the finalizer that the LBC set on the Ingress while `destroy` is running works. But even when I explicitly make the Ingress depend on the LBC's Helm chart, the LBC pods get destroyed before the Ingress, and, most importantly, before the LBC has removed the finalizer from the Ingress. So the destroy operation always needs manual intervention. Does anyone have an idea what causes this behavior, or even how to solve it?
i
Same here. It probably happens because the LB that gets created from the service isn't visible to Pulumi. So it tears down the EKS cluster, because there doesn't seem to be any reason to uninstall everything first. But that leaves the LB behind, so the VPC isn't empty … and yeah, messed-up state. What we did is separate things: we only set up the AWS infra with Pulumi, roll out the ingress controller etc. with Helm afterwards, and then have a custom destroy script that uninstalls the ingress controller first. I would love to set it up so that we pre-create the LB ourselves and tie the ingress controller to that LB with a Service of type NodePort instead of type LoadBalancer.
b
But shouldn't the LBC, on deletion via Kubernetes, wait until the LB has been deleted, remove the finalizer from the Ingress, and THEN shut down? I wonder if it has something to do with a SIGKILL being issued by Kubernetes/Docker because the LBC's shutdown takes too long after the SIGTERM, so the LBC couldn't complete a graceful termination.
I also thought of writing a weird dynamic resource that calls the Kubernetes API and removes the finalizer on a `destroy` operation, combined with some cleverly chosen explicit `dependsOn` values.
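Roughly something like this, as an untested sketch: on delete it just shells out to kubectl to strip the finalizers, so it assumes kubectl and a valid kubeconfig are available wherever `pulumi destroy` runs, and `FinalizerRemover` is a made-up name, not an existing resource type.

```typescript
import * as pulumi from '@pulumi/pulumi';

interface FinalizerRemoverArgs {
    namespace: pulumi.Input<string>;
    ingressName: pulumi.Input<string>;
}

// A dynamic provider that does nothing on create and only acts on delete:
// it strips the finalizers from the given Ingress so Kubernetes can remove it.
const finalizerRemoverProvider: pulumi.dynamic.ResourceProvider = {
    async create(inputs) {
        return { id: `${inputs.namespace}/${inputs.ingressName}`, outs: inputs };
    },
    async delete(_id, props) {
        // Assumes kubectl and a kubeconfig exist on the machine running `pulumi destroy`.
        const { execSync } = require('child_process');
        execSync(
            `kubectl patch ingress ${props.ingressName} -n ${props.namespace} ` +
            `--type=merge -p '{"metadata":{"finalizers":null}}'`
        );
    },
};

export class FinalizerRemover extends pulumi.dynamic.Resource {
    constructor(name: string, args: FinalizerRemoverArgs, opts?: pulumi.CustomResourceOptions) {
        super(finalizerRemoverProvider, name, args, opts);
    }
}
```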
@incalculable-midnight-8291 I found the solution - at least for my setup. Maybe it could help you too. Helm charts are of type `ComponentResource`, and thus using them as `dependsOn` values has no effect (this seems to be something the Pulumi team is working on). That's why the LBC terminates without the Ingress and AWS resources, and thus the finalizer, having been removed. The solution is not to pass `[helmChart]` as a dependency but `helmChart.ready`.
👍 1
i
Thanks for the ping back @bored-barista-23480!
👍 1
b
You're welcome. 😉
i
@bored-barista-23480 I started to look into it and I'm not sure how to implement it. Would you mind showing some code? Is it the `new k8s.helm.v3.Chart` that somehow has a `dependsOn` on the cluster? If so, how do I say "ready"? Is it something like this?
```typescript
import * as eks from '@pulumi/eks';
import * as k8s from '@pulumi/kubernetes';

const cluster = new eks.Cluster(clusterName, { /* ... */ });

new k8s.helm.v3.Chart('ingress-nginx', {
    version: '4.0.17',
    chart: 'ingress-nginx',
    fetchOpts: {
        repo: 'https://kubernetes.github.io/ingress-nginx',
    },
}, {
    dependsOn: [cluster],
});
```
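For what it's worth, here is a minimal sketch of how the `ready` output can be wired up, building on the snippet above; the Ingress, its names, and the `nginx` ingress class are only illustrative placeholders.

```typescript
import * as eks from '@pulumi/eks';
import * as k8s from '@pulumi/kubernetes';

const cluster = new eks.Cluster('my-cluster', { /* ... */ });

// `dependsOn: [chart]` has no effect because Chart is a ComponentResource,
// so dependents should use the chart's `ready` output instead.
const ingressNginx = new k8s.helm.v3.Chart('ingress-nginx', {
    version: '4.0.17',
    chart: 'ingress-nginx',
    fetchOpts: {
        repo: 'https://kubernetes.github.io/ingress-nginx',
    },
}, {
    provider: cluster.provider,
    dependsOn: [cluster],
});

// Anything whose create/delete ordering must follow the controller
// (e.g. an Ingress whose finalizer the controller manages) depends on `.ready`:
const appIngress = new k8s.networking.v1.Ingress('app-ingress', {
    metadata: { name: 'app-ingress' },
    spec: {
        ingressClassName: 'nginx',
        rules: [{
            http: {
                paths: [{
                    path: '/',
                    pathType: 'Prefix',
                    backend: { service: { name: 'app', port: { number: 80 } } },
                }],
            },
        }],
    },
}, {
    provider: cluster.provider,
    dependsOn: ingressNginx.ready,
});
```

On `pulumi destroy` the dependency graph is walked in reverse, so the Ingress is deleted (and the controller gets a chance to clean up its finalizer) before the chart's resources are torn down.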