# pulumi-kubernetes-operator
p
From the looks of it, it seems like the k8s operator needs to be deployed into every namespace where we expect `Stack`s and `Program`s to be deployed. Is there a way to have one deployment of the operator that monitors all the cluster namespaces for `Stack`s and `Program`s? It seems a waste of resources to have to deploy it everywhere. Or am I missing something?
g
https://github.com/pulumi/home/issues/2330 provides more context on this behavior, and https://github.com/pulumi/pulumi-kubernetes-operator/pull/329 implemented this change, which was released in v1.10.0.
p
Thanks @gorgeous-egg-16927! The first link is a 404 for me. We discovered the env var in #329 by looking at the code just this morning, after my post 🙂 I couldn’t find any reference to that feature in the documentation.
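For anyone else hunting for it, the wiring looks roughly like this (sketch only; `WATCH_NAMESPACE` is the usual operator-sdk convention we found in the code, so verify the exact variable name against the operator source for your release):

```yaml
# Sketch: env wiring on the operator Deployment. Confirm the variable
# name against the pulumi-kubernetes-operator source for v1.10.0+.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pulumi-kubernetes-operator
  namespace: pulumi-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pulumi-kubernetes-operator
  template:
    metadata:
      labels:
        app: pulumi-kubernetes-operator
    spec:
      serviceAccountName: pulumi-kubernetes-operator
      containers:
        - name: operator
          image: pulumi/pulumi-kubernetes-operator:v1.10.0
          env:
            - name: WATCH_NAMESPACE
              value: ""  # empty = watch all namespaces (needs cluster-wide RBAC)
```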
g
Whoops, didn’t notice that was internal. The quick summary is that a single operator that can reach across namespaces is a security risk because it can access Secrets and lead to privilege escalation. You would have to set up the appropriate cluster roles to make it work in the first place, but we wanted to avoid pushing people down that path.
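To make that concrete: a single watch-everything operator would need a grant roughly like this (sketch; resource names illustrative), and the cluster-wide Secrets read is exactly the escalation risk:

```yaml
# Sketch of the cluster-scoped RBAC a watch-all operator would require.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pulumi-operator-cluster
rules:
  - apiGroups: ["pulumi.com"]
    resources: ["stacks", "programs"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: [""]
    resources: ["secrets"]          # stack config/credentials live here...
    verbs: ["get", "list", "watch"] # ...readable in every namespace: the risk
```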
p
Yeah - we’re concerned about the security implications of that as well. Our use case is for the Pulumi operator to be concerned only with creating AWS resources that applications need. So what we were thinking is that we could have the operator run in a `pulumi-operator` namespace, and bind an IAM service-account-linked role to the operator’s SA. The IAM role would have the AWS `AdministratorAccess` IAM policy attached to it. We’d then back that up with something like Gatekeeper or Kyverno to control what kinds of things `Program`s or `Stack`s could do with a validating admission policy. Then people would deploy programs or stacks in any namespace, and the operator would act on them. One question though: does the operator that’s running the `Program` use the role that is linked to the SA? In other words, would we still need to provide an AWS access key and secret key for stacks to deploy properly?
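For context, the IRSA binding we have in mind looks roughly like this (sketch; the role ARN is a placeholder for a role with `AdministratorAccess` attached):

```yaml
# Sketch: EKS IAM Roles for Service Accounts (IRSA) binding for the operator.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pulumi-kubernetes-operator
  namespace: pulumi-operator
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/pulumi-operator-role
```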
g
I’ll defer to @eager-football-6317 on that, since he’s got more recent context on the operator.
e
Stacks are run in the context of the operator’s service account, so I would expect them to pick up service account linked roles. Programs are handled by the same process.
> We’d then back that up with something like Gatekeeper or Kyverno to control what kinds of things `Program`s or `Stack`s could do with a validating admission policy.
I think you will find this difficult, in general. You might be able to constrain which git repos Stacks can use. But you’d also have to control what goes in those repos.
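The git-repo constraint could look something like this Kyverno sketch (assumes the `Stack` CRD exposes the repo URL at `spec.projectRepo`; verify against your installed CRD version):

```yaml
# Sketch: restrict which git repos Stack resources may reference.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-stack-repos
spec:
  validationFailureAction: Enforce
  rules:
    - name: allow-only-org-repos
      match:
        any:
          - resources:
              kinds: ["Stack"]
      validate:
        message: "Stacks may only use repos under github.com/my-org/."
        pattern:
          spec:
            projectRepo: "https://github.com/my-org/*"
```

But as noted, a policy like this only constrains the repo URL, not what the code in that repo actually does.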
p
Thanks @eager-football-6317: We were planning to use the recently-announced inline YAML rather than pulling from a Git repo. This would allow us, I believe, to write OPA or Kyverno policies for `Program` or `Stack` resources, since the program would be in the manifests themselves. Regardless, we’re putting the work on the Pulumi Operator down for a bit. Needing to deploy it in every namespace is too cumbersome for our use case. As much as I’d prefer Pulumi, Crossplane and ACK don’t have this limitation.
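(For clarity, by inline YAML we mean something like this sketch; field shapes from memory, so double-check against the `Program` CRD reference:)

```yaml
# Sketch: a Program with the Pulumi YAML program inline in the manifest,
# so admission policies can inspect the resources it declares.
apiVersion: pulumi.com/v1
kind: Program
metadata:
  name: app-bucket
program:
  resources:
    my-bucket:
      type: aws:s3:Bucket
```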
e
Understood on the namespace issue. Maybe at some point we can make it more amenable to this kind of deployment, by locking the operator down to only run Programs (and certain providers). It’s a tricky balance when the whole selling point is being able to write programs that create arbitrary resources! :-S