02/18/2021, 8:02 AM
I've been working on setting up one-click cluster provisioning automation so I can spin up clusters on demand (be it for test environments or disaster recovery). So far I'm pretty happy with everything but one thing - the ingress controller(s).

- Do you install the ingress controller as part of your initial cluster deployment? E.g. `pulumi up` installs some helm charts for the controller and that's it?
- How do you handle ingress updates? This sounds like a potentially dangerous operation that should ideally be tested manually.
- Upgrading the ingress might work, but installing the target version on a fresh cluster might fail or not work as expected.
- And especially vice versa: new installs are fine, but `helm upgrade` breaks something (CRDs can be a pain).

Currently I have a manual step that I do for cluster initialization: the pulumi output (ingress IP, namespace name, etc.) is passed to a helm script that performs the initial ingress installation, and this works pretty great. The problem is that helm is not declarative and that upgrades sometimes involve breaking changes (e.g. CRDs have to be applied manually), so I end up performing these manual steps to ensure there's no downtime. It's literally easier to spin up a new cluster and gradually switch traffic over tbh.
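For context, my manual initialization step looks roughly like this - a sketch, assuming an ingress-nginx install where the stack output names (`ingressIp`, `ingressNamespace`) are placeholders for whatever your Pulumi program actually exports:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Read the values exported by the Pulumi stack (output names are placeholders)
INGRESS_IP="$(pulumi stack output ingressIp)"
NAMESPACE="$(pulumi stack output ingressNamespace)"

# Idempotent install: upgrade if present, install otherwise.
# Pinning --version keeps fresh installs and upgrades on the same chart release.
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace "$NAMESPACE" --create-namespace \
  --version 3.23.0 \
  --set controller.service.loadBalancerIP="$INGRESS_IP"
```

The annoying part is that `helm upgrade` does not touch CRDs after the initial install, which is exactly where the manual steps creep in.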