# kubernetes
b
Hi everyone. I'm currently debating the merits of managing K8s deployments in Pulumi vs. Helm. I already provision the K8s infrastructure (cluster, etc.), but have not yet embraced making K8s API calls (to deploy, etc.) from within Pulumi. Am I right in assuming that Pulumi can be a full replacement for Helm -- for example, that it can compute the diffs between desired and deployed K8s resources and make the necessary changes (such as deleting a Service if you've deleted its definition in the Pulumi code)? I do like the idea of colocating infra provisioning with app deployment, but I worry that a corrupted state file, caused by some bad or unlucky infra provisioning, could block progress on application deployment changes inside K8s. Does anyone have thoughts on how to weigh the pros and cons here?
g
I maintain Pulumi’s k8s provider, so I’m biased, but I’d vote for managing k8s apps in Pulumi and skipping Helm if possible. You can make a Helm chart equivalent by encapsulating k8s resources in a ComponentResource. Pulumi will show diffs, manage upgrades, etc. as you’d expect. We also offer a Helm resource, so you can mix and match if you have existing Helm charts you want to use. As far as structuring your code, I’d suggest keeping app code in a separate stack from infrastructure code that won’t be touched as frequently. https://www.pulumi.com/docs/guides/crosswalk/kubernetes/apps/ has more information and examples of this.
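A minimal sketch of what that encapsulation could look like in TypeScript (the names, labels, and values here are illustrative, not from a real chart):

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// A "chart-like" component: related k8s resources grouped behind one parent.
export class WebApp extends pulumi.ComponentResource {
    constructor(name: string,
                args: { image: string; replicas?: number },
                opts?: pulumi.ComponentResourceOptions) {
        super("example:app:WebApp", name, {}, opts);

        const labels = { app: name };

        new k8s.apps.v1.Deployment(`${name}-deployment`, {
            spec: {
                selector: { matchLabels: labels },
                replicas: args.replicas ?? 1,
                template: {
                    metadata: { labels },
                    spec: { containers: [{ name, image: args.image }] },
                },
            },
        }, { parent: this });

        new k8s.core.v1.Service(`${name}-service`, {
            metadata: { labels },
            spec: { selector: labels, ports: [{ port: 80 }] },
        }, { parent: this });

        this.registerOutputs({});
    }
}

// Usage: new WebApp("frontend", { image: "nginx:1.21", replicas: 3 });
```

If you later delete the Service from the component, `pulumi up` shows that delete in the diff, just like the scenario you described.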
b
thanks for the response!
d
After using Helm + Pulumi for about a year, we're moving away from Pulumi for anything Kubernetes-related. Almost everything in the Kubernetes ecosystem can be deployed using a Helm chart - it is the overwhelming de facto standard for deploying and configuring k8s resources. Simply wrapping the chart deployment in Pulumi does not work, since Pulumi does not support Helm hooks. Many common charts like cert-manager use hooks - and they are incredibly useful for your own deployments too. We were also finding that our Pulumi configuration files were starting to look a lot like Helm's values.yaml, except we had the added tax of writing all the deserialization code in C#. Turns out YAML is kinda what you want when deploying Kubernetes. You also end up with two naming systems to worry about: Pulumi's and Kubernetes'. Multiple times we had failures due to Pulumi naming conflicts that had nothing to do with the k8s resources. That problem just doesn't exist in Helm.
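To be concrete, this is the kind of wrapping I mean (a rough sketch - the version and values are illustrative). Because the chart is rendered client-side, Helm lifecycle hooks in the chart don't run as hooks, which is the limitation I'm describing:

```typescript
import * as k8s from "@pulumi/kubernetes";

// Wrapping an existing community chart in Pulumi (illustrative values).
// The Chart resource renders templates client-side, so the chart's Helm
// hooks are not executed the way `helm install` would run them.
const certManager = new k8s.helm.v3.Chart("cert-manager", {
    chart: "cert-manager",
    version: "v1.5.3", // illustrative
    namespace: "cert-manager",
    fetchOpts: { repo: "https://charts.jetstack.io" },
    values: { installCRDs: true },
});
```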
b
@dazzling-alarm-77720 thanks for your input! The killer feature I'm looking for is basically type safety (and testability), but I can see a world in which the effort required to get it, plus the risk of being unable to progress deployments due to state file issues, makes it "not worth it"
d
Helm validates the rendered templates against the Kubernetes OpenAPI spec, so it does provide "type safety" after a fashion, as well as linting for errors like not setting resource limits. It's a much better dev experience.
b
sure, but there's no type checking before rendering, which is where most of the pain lies -- bad values, bad indentation, bad results, etc
g
Mitch - you might also be interested in https://www.pulumi.com/kube2pulumi/ which makes it pretty easy to get started with strongly-typed resource definitions. It is often a bit more verbose than YAML, but I think the tradeoff is well worth it since you can start treating those resources as software components rather than basic config files.
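For example, a small Namespace manifest comes out roughly like this (a hand-written approximation of kube2pulumi's TypeScript output, not its exact emission):

```typescript
import * as k8s from "@pulumi/kubernetes";

// Approximate kube2pulumi output for a manifest like:
//   apiVersion: v1
//   kind: Namespace
//   metadata:
//     name: monitoring
// Field names are checked at compile time, so typos and indentation
// mistakes surface before anything is rendered or applied.
const monitoringNamespace = new k8s.core.v1.Namespace("monitoringNamespace", {
    apiVersion: "v1",
    kind: "Namespace",
    metadata: { name: "monitoring" },
});
```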
d
@bright-sandwich-93783 indent bugs are no fun for sure. Real type safety is nice, but without hook support, it's really about core functionality vs. dev experience. If you want to leverage a community chart that uses hooks you're kinda stuck unless you reimplement it...
s
@dazzling-alarm-77720 we are also currently in the process of adding a blackbox Helm release resource, which might help address some of your concerns. I'm happy to set up some time next week to get your input.
b
I also agree with skipping helm wherever possible. On our side, we use pulumi for baseline components (databases, networking, security, monitoring, etc) - and I LOVE crd2pulumi (though wish there were a better way of keeping the CRDs up to date). It would be cool to see something like OLM find a pulumi counterpart as well - I don't need "over the air updates" and all that jazz - and I don't need hooks in helm (I can't stand how it monopolizes shared resources) - what I need is something that allows me to declaratively state the fundamental behavior of the cluster. And for that, pulumi is excellent. We also use pulumi for some deployments (migrating away from helm) since there are quite a few things that require some more complex calculations and logic to get the right deployment parameters - and it's awesome to be able to leverage a full programming language (and api calls) to do so at deploy-time.
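As a rough illustration of that kind of deploy-time logic (everything here is made up - the config key, the sizing math, the image):

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Compute deployment parameters in real code instead of templated YAML.
// "expectedRps" and the 500-rps-per-pod sizing rule are invented for this example.
const config = new pulumi.Config();
const expectedRps = config.requireNumber("expectedRps");
const replicas = Math.max(2, Math.ceil(expectedRps / 500));

new k8s.apps.v1.Deployment("api", {
    spec: {
        replicas,
        selector: { matchLabels: { app: "api" } },
        template: {
            metadata: { labels: { app: "api" } },
            spec: { containers: [{ name: "api", image: "example/api:latest" }] },
        },
    },
});
```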
I do really wish there was a bit more... pulumi... in the kubernetes module though. Right now I'm fully blocked by an issue where I can't scale to multiple clusters, because it's not possible to deploy my resources to more than one cluster (i.e. instantiating the same component resource twice, with different Pulumi names and different providers).
b
yeah, so far I've found the Pulumi k8s API pretty lacking. You can really only create resources. Importing resources is a huge pain because you need to get the Pulumi definition to line up perfectly with the existing resource definition, when you might not care about tracking all fields. For example, I just want/need to annotate an existing ServiceAccount in the kube-system namespace. Pulumi doesn't have a good way to manage this other than importing the full resource, when I really only want it to manage the annotation addition.
b
yeah - it's tricky when pulumi cares more than the provider does. Simple things, like declaring a namespace that already exists (one of the first problems most Kubernetes deployment tools solve), become unnecessarily complicated. Failed deployments that actually succeeded mean taking things out of CI/CD and running them manually - or manually deleting a resource just so that pulumi can deploy it (not just a Kubernetes issue though, ref. my `pulumi adopt` semi-proposal).
With the ServiceAccount aspect - I do understand that with Kubernetes a declaration is "all or nothing": you can't tune just a single parameter of an existing resource. But that's only true for apply commands. If we had the ability to use `patch`, we could actually control just parts of a declaration.
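Something like this is what I'm imagining - to be clear, `ServiceAccountPatch` is a hypothetical resource I'm sketching here, not something the provider offers today:

```typescript
import * as k8s from "@pulumi/kubernetes";

// Hypothetical patch-style resource: manage only one annotation on an
// existing ServiceAccount and leave every other field alone.
// "ServiceAccountPatch" and the annotation key are invented for this sketch.
const saAnnotation = new k8s.core.v1.ServiceAccountPatch("default-sa-annotation", {
    metadata: {
        name: "default",
        namespace: "kube-system",
        annotations: { "example.com/owner": "platform-team" },
    },
});
```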
b
> Simple things, like declaring a namespace that already exists (one of the first problems most Kubernetes deployment tools solve), become unnecessarily complicated.
Exactly!!
I'm basically just going to shell out to kubectl 😞
b
not that kubectl doesn't also complain about declaring already existing namespaces.. but hey 😉
b
`kubectl annotate` is what I'd be using