# general
h
When I use k8s to create an ingress controller on GCP, what is the easiest way to get at the underlying load balancer instance to do additional configuration on its settings/health checks/etc.?
c
@helpful-advantage-49286 hmm, you mean that you want to configure the load balancer that underlies the `Ingress` itself?
h
Yeah, I want to change some of the healthchecks, and tweak some other stuff that isn’t exposed via k8s
c
@helpful-advantage-49286 so this is a pretty common use case, but in Pulumi, if you want to manage a resource, you have to create it yourself.
I actually typically recommend that people don’t use `Ingress` if they can avoid it. If you’re already married to GCP, I personally would just use the pod-aware load balancing and be done with it. This is the sort of thing that’s super easy in Pulumi; to me the compelling use case for `Ingress` is when you’re not using Pulumi, where this kind of thing is super hard.
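For reference, a minimal sketch of the “own the health check yourself” part in Pulumi (TypeScript with `@pulumi/gcp`). The resource names, port, request path, and the backend `group` value are placeholder assumptions, not anything from this thread:

```typescript
import * as gcp from "@pulumi/gcp";

// A health check with the knobs that aren't reachable through the Ingress resource.
// The path and port are placeholders for whatever the app actually serves.
const healthCheck = new gcp.compute.HealthCheck("web-hc", {
    checkIntervalSec: 10,
    timeoutSec: 5,
    healthyThreshold: 2,
    unhealthyThreshold: 3,
    httpHealthCheck: {
        port: 8080,
        requestPath: "/healthz",
    },
});

// Placeholder: the self-link of the group the LB should send traffic to,
// e.g. a NEG for pod-aware load balancing.
const backendGroup = "<self-link of your NEG or instance group>";

// Backend service wired to the custom health check.
const backendService = new gcp.compute.BackendService("web-backend", {
    protocol: "HTTP",
    timeoutSec: 30,
    healthChecks: healthCheck.id,
    backends: [{
        group: backendGroup,
        balancingMode: "RATE",
        maxRatePerEndpoint: 100, // rate-based balancing, as used with NEG backends
    }],
});

export const backendServiceName = backendService.name;
```

Once these are Pulumi-managed resources, every field GCP exposes on the health check and backend service is a plain input property, which is the “easy to tweak” part.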
h
Interesting, so you would suggest just creating the load balancer manually and pointing it at the NodePort?
Any docs on how I can do that easily?
c
hmm
h
(this is 3 new things to me, k8s, gcp and pulumi)
c
let me try to find some
h
I can probably figure out the pulumi bits if you can point me at good gcp docs for this!
c
in case it is not clear, this LB will route traffic directly to your pods. Normally the LB routes traffic to a node, and then that node forwards the traffic to the actual pod.
I’m not 100% sure how to boot the underlying LB without a service, but I’ve heard about people who did it
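To make the “boot the underlying LB” idea concrete, this is the usual GCP HTTP load balancer front-end chain (URL map → target proxy → forwarding rule) sketched in Pulumi TypeScript; the backend service reference is a placeholder, and whether this fully replaces the Service/Ingress path depends on how the backend group is wired up:

```typescript
import * as gcp from "@pulumi/gcp";

// Placeholder: the backend service you already manage (see the earlier sketch).
const backendServiceLink = "<self-link of your backend service>";

// URL map: send all paths to the single backend service for now.
const urlMap = new gcp.compute.URLMap("web-url-map", {
    defaultService: backendServiceLink,
});

// HTTP proxy that serves the URL map.
const httpProxy = new gcp.compute.TargetHttpProxy("web-proxy", {
    urlMap: urlMap.id,
});

// Global forwarding rule: the public entry point on port 80.
const forwardingRule = new gcp.compute.GlobalForwardingRule("web-fwd", {
    target: httpProxy.id,
    portRange: "80",
});

export const loadBalancerIp = forwardingRule.ipAddress;
```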
h
hrm, this still has you using an Ingress
c
I think that’s just one option.
I think you can manually create network endpoint groups, could be wrong though as I’ve never done it myself
note though, this will be at a minimum quite a bit more work, if you do get it working.
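For later reference, “manually create network endpoint groups” would look roughly like this in Pulumi TypeScript; same caveat as above, this is an untested sketch, and the network, subnetwork, zone, node name, and pod IP are all placeholders:

```typescript
import * as gcp from "@pulumi/gcp";

// Placeholders for where the GKE cluster actually lives.
const network = "<your VPC network self-link>";
const subnetwork = "<your subnetwork self-link>";
const zone = "us-central1-a";

// A standalone zonal NEG. On GKE you'd normally let the NEG controller create and
// populate this (via the `cloud.google.com/neg` annotation on a Service); doing it
// by hand means you also own keeping the membership in sync.
const neg = new gcp.compute.NetworkEndpointGroup("pod-neg", {
    network: network,
    subnetwork: subnetwork,
    zone: zone,
    defaultPort: 8080,
});

// Each endpoint is a (node instance, pod IP, port) tuple. Hard-coding these is the
// "quite a bit more work" part -- pod IPs change every time a pod is rescheduled.
const endpoint = new gcp.compute.NetworkEndpoint("pod-endpoint", {
    networkEndpointGroup: neg.name,
    zone: zone,
    instance: "<name of the GKE node running the pod>",
    ipAddress: "10.8.0.5", // placeholder pod IP
    port: 8080,
});
```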
when I’m in on Tuesday I can try to help out more; got to go get stuff so I can make dinner now though
h
No rush! I have what I need working now, just decided to serve something at / that is valid to work around the issue temporarily!
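For anyone hitting the same thing: the temporary workaround above works because the default health check GCP creates for a GKE Ingress typically probes `/` and expects a 200. A minimal stand-in in TypeScript (the port and response body are arbitrary):

```typescript
import * as http from "http";

// The LB's default health check hits "/" and wants an HTTP 200, so answering there
// keeps the backend marked healthy until a proper health check is configured.
const server = http.createServer((req, res) => {
    if (req.url === "/") {
        res.writeHead(200, { "Content-Type": "text/plain" });
        res.end("ok");
        return;
    }
    // ...the application's real routes would live here...
    res.writeHead(404);
    res.end("not found");
});

server.listen(8080);
```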