# general
s
What's the best channel for self-hosting/home-lab discussions? (i.e., not AWS/GCP/Azure) I am using Proxmox (via https://www.pulumi.com/registry/packages/proxmoxve/ , thx @worried-energy-90920 🙌) and passing this stack in as a parent stack to a https://www.pulumi.com/registry/packages/kubernetes/ stack (so a dynamic-provider k8s).

The plan is to have a `dev` and a `live` cluster, as two different stacks inside each of the Pulumi programs (so 4 total stacks). But I only have 1 public IP, and so can only use https://www.pulumi.com/registry/packages/kubernetes-cert-manager/ once, since I only have 1 domain (managed via GCP Cloud DNS). I think there is a catch-22 situation that I don't know how to reason about yet(?)

I want a valid certificate for both the dev cluster and the live cluster. Is there a way to tell the `live` stack to point its <https://dev.mydomain.com> to what would be the dev cluster sitting on the same home lab? The idea being to have both <https://myk8sapp.dev.mydomain.com> and <https://myk8sapp.mydomain.com> working. It feels like this will only be possible if I cert-manage Let's Encrypt on the `dev` cluster for the `live` cluster...(?)
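For context, a cert-manager issuer doing DNS-01 validation against Cloud DNS (the setup this question is about) would look roughly like this; a sketch only, where the project ID, email, and secret names are placeholders:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@mydomain.com          # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-account-key    # ACME account key storage
    solvers:
    - dns01:
        cloudDNS:
          project: my-gcp-project      # placeholder GCP project ID
          serviceAccountSecretRef:     # secret holding the GCP SA key
            name: clouddns-dns01-sa
            key: key.json
```

With DNS-01 the cluster never needs to be reachable from the internet for validation, which matters here since there's only one public IP.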
w
hi, the cert-manager should be fine. i actually do similar things, with one cluster at home and one in the cloud but running on the same domain. 😊 i host my domain in Google Cloud DNS. for cert-manager i use DNS-based validation, which works flawlessly with the right GCP credentials. i wouldn't even try HTTP-based validation here!

what i do, and would suggest here, is to create a new DNS zone for your dev domain and do a sub-delegation of it. then the cert-manager in the dev cluster only accesses this sub-delegated zone. that's a bit more secure, i believe, but since cert-manager deletes the validation entry after successful issuance, you could actually issue the same domain name in multiple clusters. 😄 (happened to me and it worked!)

the more challenging part is definitely the 1 public IP here. you can't really distinguish which cluster in your homelab a request has to be forwarded to, since your router will most likely only port-forward to one of the clusters, or actually only to the load-balanced service in it (e.g. a traefik ingress controller)…

a very very very stupid idea though: don't issue certs for the dev cluster, configure ExternalName services and endpoints and ingress resources in the live cluster for the dev services, and kinda proxy all requests to dev through live. 🙈 😆

you could however throw a multi-cluster service mesh into the game… then dev is discoverable by live and vice-versa without that "hack" of ExternalNames, etc… but still you'd only have one entry point, which is either the dev or live cluster. at least that's my quick-thought take on it 🤷🏻‍♂️

oh, and thanks for the mention. highly appreciated and happy to hear the provider is being used 😁
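that ExternalName "proxy through live" idea, sketched (a rough illustration only: the hostnames, the internal dev-ingress address, and the nginx backend-protocol annotation are all assumptions, and the dev ingress must be reachable from the live cluster on the LAN):

```yaml
# In the LIVE cluster: an ExternalName service aliasing the dev ingress
apiVersion: v1
kind: Service
metadata:
  name: myk8sapp-dev
spec:
  type: ExternalName
  externalName: dev-ingress.internal.lan   # hypothetical LAN name of the dev cluster's ingress
  ports:
  - port: 443
---
# An ingress in the LIVE cluster that routes the dev hostname to that service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myk8sapp-dev
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"  # assumes ingress-nginx
spec:
  rules:
  - host: myk8sapp.dev.mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myk8sapp-dev
            port:
              number: 443
```

so the single public IP only ever forwards to live, and live re-proxies `*.dev.mydomain.com` traffic inward.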
s
Great food for thought, thanks, @worried-energy-90920! I need to chew on this more, since the intention is to have the `dev` cluster "match" the `live` home-lab cluster as much as possible.

I do have a second ISP plugged into the home, so technically I could get a second domain name to A-record to this changing public IP. Or tunnel traffic out from `dev` to some dummy cloud machine (a VPS) with a static public IP, which moves away from "self"-hosting, though maybe a pragmatic compromise.
w
yeah, so if you set up a tunnel from a cloud machine, you'll definitely not have any major challenges, although I agree that this defies the purpose of a full "homelab". 😅 so the second ISP fits that purpose better and is definitely a solution, as long as you can make that dynamic IP work so that your domains always point to the right one.

personally, i believe one should make use of the technologies available if they fit the purpose and make your life easier… that's also why i have a homelab and a public cluster, for the different kinds of availability needed for the apps hosted. 😄

btw, out of curiosity: how do you manage your 2 ISPs? are you combining them for better bandwidth, or is one your backup? what router are you using for that? 😀
s
Re: 2 ISPs. I got Multipath TCP working at some point; found these helpful (MPTCP v1, not v0 - very confusing): https://www.tessares.net/mptcp-proxy-setup-at-home-easier-now/ and https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configu[…]arted-with-multipath-tcp_configuring-and-managing-networking (But in practice, my wife uses the "slower" connection, which is more stable for her WFH, and I get to tinker on the faster, more unreliable connection 😅)

Right now it's just MetalLB on L2, but I was looking into https://metallb.universe.tf/configuration/_advanced_bgp_configuration/ and https://www.althea.net/ for BGP load balancing, for when I get a 2nd OpenWRT-based router; I now have an older version of https://www.turris.com/en/products/omnia/

Here's the IaC code I hacked around with to get DHCP-like provisioning of IPs for your ProxmoxVE provider to use (because initialization via DHCP kept assigning the same IP to different MAC addresses; I guess PVE or the router, or both, were getting overwhelmed?): https://gitlab.com/deposition.cloud/infra/compute/nebula/-/blob/main/src/router/lease.ts?ref_type=heads#L62 Basically, a dynamic Pulumi provider for OpenWRT static leases 🙂 - I know it's dirty, but it works. I had tried your approach of wrapping a tf-provider, but got stuck on this: https://github.com/joneshf/terraform-provider-openwrt/issues/136
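The MetalLB-over-BGP setup being considered would look something like this in MetalLB's CRD config; a sketch with made-up values (the ASNs, peer address, and pool range are placeholders, and the router side, e.g. the Omnia, would need a matching BGP peer):

```yaml
# Addresses MetalLB may hand out to LoadBalancer services
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: homelab-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250   # placeholder LAN range
---
# Peer with the OpenWRT router speaking BGP
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: omnia-router
  namespace: metallb-system
spec:
  myASN: 64500                     # private ASN for the cluster (placeholder)
  peerASN: 64501                   # private ASN for the router (placeholder)
  peerAddress: 192.168.1.1         # placeholder router address
---
# Announce the pool over the BGP session
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: homelab-adv
  namespace: metallb-system
spec:
  ipAddressPools:
  - homelab-pool
```

With two upstream routers each peered this way, both could learn routes to the service IPs, which is what makes the multi-ISP load-balancing idea viable.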