# general
c
Kubernetes implements a 3-way “strategic” merge between the new API object you are providing, the current live object inside Kubernetes, and the previous version of the API object.
In order to implement this, clients like `kubectl` require you to embed the previous version of each API object inside its `.metadata.annotations`. Instead of doing this, we just use a state file.
This is necessary in particular to be compatible with (e.g.) admission controllers. The idea is that if an admission controller adds a container or something out-of-band, your update won’t nuke it when you push.
So another way of looking at this, @straight-cartoon-74589, is that Kubernetes already has a notion of a state file, but that state file is implicit in the storage of the API object itself.
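To see that embedded previous version for yourself, kubectl can print it directly (a quick sketch; the Deployment name `my-app` is illustrative):
```sh
# Print the previous input kubectl stashed in .metadata.annotations under
# kubectl.kubernetes.io/last-applied-configuration:
kubectl apply view-last-applied deployment/my-app
```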
Now, should it get out of sync, just run `pulumi refresh`.
I suppose you could make the argument that it’s harder to get out of sync by storing the previous version of the state in `.metadata.annotations`, but my counter-argument is that I think your deployments should be managed by your CI/CD system, which ideally is just shelling out to (e.g.) `pulumi`.
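Concretely, such a CI step can be a tiny sketch like this (the stack name and the use of `--yes` are illustrative of one setup, not prescribed):
```sh
# Minimal CI deploy step: re-sync the snapshot, then reconcile.
pulumi stack select production   # illustrative stack name
pulumi refresh --yes             # update the last snapshot from the live objects
pulumi up --yes                  # push the program's desired state
```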
make sense?
s
Yeah, we'll likely have each stack deployed by a separate build.
but managing a local state file is a tiny bit tricky. And sending that to the cloud is likely a non-starter.
c
That’s good feedback.
s
so would running `pulumi refresh` at the start of each build let me essentially ignore state altogether?
c
That would update your last snapshot of the live object, but you’d still need the last input.
For k8s we could definitely store the snapshots in a CRD/ConfigMap/whatever.
This has other limitations, though; most notably, you would probably end up with secrets stored in there.
Same story with Helm.
Basically, the only hard constraint the k8s stuff should IMHO have is: it must use the k8s API itself. If users have to step out of band and install some god-mode-fake-api-server, like Tiller, I view the project as having failed. 🙂
😂 2
Because then you jettison things like RBAC.
s
^^ +1000
c
So I guess what I’m saying is, I’m super open to other ideas, and it’s not clear to me which one we should pick.
I think storing things in the annotations is not a terrible idea.
Probably the best.
s
I guess I'll have a better idea what my concerns are after I actually get a PoC running and see what it stores.
c
cool.
s
I'm just used to a lot of tools that do a "get the state, reconcile the state" workflow directly against the backend.
c
I’ll file an issue about considering using annotations for last inputs, which I believe should mitigate nearly all problems with state files.
Pulumi should follow the same model…
Pulumi is basically a human-powered k8s controller.
If the state is different, just reconcile.
s
Sounds good. Thanks for the good info!
c
np.
please do tell us what you think even if you think it sucks.
s
Well from what I've seen already, I'm ready to abandon my homegrown `helm template` + custom YAML manifest glue solution altogether.
👍 1
c
that I am especially interested in.
that is what the `k8s.v2.helm.Chart` API is meant to replace.
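For reference, a minimal sketch of what that looks like (assuming the Chart API as it ships in `@pulumi/kubernetes`, where it is exposed as `k8s.helm.v2.Chart`; the chart and values are illustrative):
```typescript
import * as k8s from "@pulumi/kubernetes";

// Deploy the stable/jenkins chart without `helm template` + YAML glue.
// The values here stand in for a hand-maintained values.yaml.
const jenkins = new k8s.helm.v2.Chart("jenkins", {
    repo: "stable",
    chart: "jenkins",
    values: {
        master: { serviceType: "ClusterIP" },
    },
});
```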
g
helm is really crap, very unfortunate that it ever became so popular imho.
c
lol ouch
g
k8s starters always get caught in the helm trap initially, just because it’s so popular. But when you start using it beyond some hello-world, let’s-try-out-this-software-real-quick scenario, you really get lost in its complexities. Oftentimes I just end up doing an initial rendering of the templates and then checking in the raw k8s YAMLs instead.
c
Interesting, what are you referring to when you say “complexity”?
Just managing go templates?
g
well for starters, securing Tiller sounds super complex imho, especially when you’ve just started with k8s. Other things: repo/dependency management is very implicit (`helm repo update`, `helm repo add`, etc.). I’d like to have some package.json-like file to explicitly lock dependencies. Then there’s figuring out how to do idempotent helm deploys: `helm upgrade jenkins stable/jenkins --install -f values.yaml` does the job, but you have to find that out first; why is there not a `helm apply` or so? Another thing I noticed yesterday: I wanted to preview rendered templates. In order to do so you can’t use `--dry-run` or some parameter to install/upgrade; instead you first have to `helm fetch` and then `helm template`. And yes, some of the YAML Go templates in charts look very complex, although I fortunately never had to author any.
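For anyone following along, the dance described above looks roughly like this (Helm v2; the chart and file names are illustrative):
```sh
# Idempotent install-or-upgrade (there is no `helm apply`):
helm upgrade jenkins stable/jenkins --install -f values.yaml

# Previewing rendered templates requires fetching the chart first:
helm fetch stable/jenkins --untar       # unpacks the chart into ./jenkins
helm template ./jenkins -f values.yaml  # renders the manifests locally
```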
c
Yeah, IMHO the biggest problem with Helm is that Tiller is a completely parallel fake-api-server
You can’t do anything you would use the API server for… stuff like RBAC is not re-implemented in Tiller.
So you just have this gRPC endpoint that is the god of your little cluster.
It seems very strange that this would be something you’d want, but I guess if you’re just installing apps, it probably works ok.
g
The other day I also read a good blog post about all the design issues in helm, and I commented on it specifically to complain about the terminology in helm 🙂 : https://medium.com/@c.theilemann/well-said-ab845004feba
c
our CLI experience is similarly disjointed IMHO, but yes, this confuses everyone.
`helm upgrade` is what it is because Kubernetes has `Deployment`s, so that at least makes sense, kind of.
I’ve also had it up to here with cute nautical and greek names.
No more. Be your own hero, kids.
g
😄
I actually had a helm moment today, when I read about the term pulumi pearl 😛
h
Is there an example of using Azure as a Web Backend?