# general
e
I'm working on locking down access by mapping IAM roles to specific namespaces within my EKS cluster. The role mappings will change frequently, whereas the cluster infra will not. E.g. a single EKS cluster will serve as a sandbox for dozens of teams. We will create and destroy namespaces often as teams reserve parts of the cluster for testing or deployment. Each team will be granted an IAM role that maps, via the `aws-auth` ConfigMap, to a Kubernetes user that is limited to actions within a specific namespace. Hence, the `aws-auth` ConfigMap will be updated often and will quickly drift out of sync with the initial definition created when we run `new eks.Cluster`. Can you recommend a Pulumi way to manage the namespaces, IAM roles, Kubernetes roles, RoleBindings & usernames?
E.g. what could the stack/project structure look like?
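To make it concrete, here's the rough shape of the per-team resources I have in mind (just a sketch; `team-a`, the account ID, and the names are all placeholders):
```typescript
import * as aws from "@pulumi/aws";
import * as k8s from "@pulumi/kubernetes";

// Sketch only: "team-a" and the account ID are placeholders.
const team = "team-a";

// IAM role the team assumes to authenticate to the cluster.
const teamIamRole = new aws.iam.Role(`${team}-iam-role`, {
    assumeRolePolicy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
            Effect: "Allow",
            Principal: { AWS: "arn:aws:iam::123456789012:root" },
            Action: "sts:AssumeRole",
        }],
    }),
});

// Namespace the team is confined to.
const ns = new k8s.core.v1.Namespace(team, { metadata: { name: team } });

// RBAC Role granting full rights inside that namespace only.
const nsRole = new k8s.rbac.v1.Role(`${team}-ns-role`, {
    metadata: { namespace: ns.metadata.name },
    rules: [{ apiGroups: ["*"], resources: ["*"], verbs: ["*"] }],
});

// Bind the Kubernetes username that aws-auth maps the IAM role to.
new k8s.rbac.v1.RoleBinding(`${team}-ns-binding`, {
    metadata: { namespace: ns.metadata.name },
    subjects: [{ kind: "User", name: `${team}-user`, apiGroup: "rbac.authorization.k8s.io" }],
    roleRef: { kind: "Role", name: nsRole.metadata.name, apiGroup: "rbac.authorization.k8s.io" },
});
```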
c
@early-musician-41645 IN FACT WE CAN!
it’s a WIP; we’re working on building up the material up to the EKS layer.
cc @breezy-hamburger-69619
e
ah, thanks!
Hmm, I think I'm looking for something slightly different: more like guidance on how to architect Pulumi stacks for managing a set of permissions across namespaces and mapping those to roles.
Something along the lines of this, except scaled to dozens of namespaces, with many IAM roles to match. I'd like to separate the EKS cluster management from the IAM role & kube role management. I thought this could work as a separate stack for the namespace/env/role mappings, but it would modify the `aws-auth` ConfigMap that is already created in the eks-cluster stack (via `new eks.Cluster`).
So there would be a problem if we ever want to update the eks-cluster stack after having made multiple changes to the `aws-auth` map...
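E.g. the permissions stack could pull the cluster's kubeconfig from the eks-cluster stack via a StackReference (a sketch; the org/project/stack names are made up, and it assumes the cluster stack exports its kubeconfig):
```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Placeholder stack path; assumes the eks-cluster stack does
// `export const kubeconfig = cluster.kubeconfig;`.
const clusterStack = new pulumi.StackReference("acme/eks-cluster/dev");
const kubeconfig = clusterStack.getOutput("kubeconfig");

// A provider scoped to the shared cluster; every Namespace/Role/RoleBinding
// in this stack targets the cluster through it.
const provider = new k8s.Provider("eks", {
    kubeconfig: kubeconfig.apply(JSON.stringify),
});

new k8s.core.v1.Namespace("team-a", { metadata: { name: "team-a" } }, { provider });
```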
b
Thanks for the feedback! We’re building out some material that hopes to speak to various topics such as the one you’re discussing, and we’re always interested to hear about different use cases. IIUC, your root issue is that because `aws-auth` changes often from the initial one created, this in turn will cause the cluster to be rebuilt on future updates, and you want to avoid this, right? If so, then I don’t see anything wrong with your approach of mapping many IAM roles to roleMappings per namespace. The only thing to note is that because `aws-auth` is the only ConfigMap in EKS that maps IAM -> RBAC, access to it should be limited to admins.
Your issue, as I understand it, would ultimately be due to Pulumi’s delete-before-replace semantics, which trigger a cascading delete on AWS resources for EKS. Issues [1] and [2] describe this in more detail. Thankfully, PR [3] to `@pulumi/pulumi` resolves it, and `@pulumi/eks` will be updating its dependency on `@pulumi/pulumi` [4], so we should have a fix out for this soon.
[1] - https://github.com/pulumi/pulumi-eks/issues/46
[2] - https://github.com/pulumi/pulumi/issues/2167
[3] - https://github.com/pulumi/pulumi/pull/2369
[4] - https://github.com/pulumi/pulumi-eks/issues/46#issuecomment-459544080
Hope this helps
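FWIW, here's a minimal sketch of where those role mappings plug into the cluster resource (the ARN, username, and group below are placeholders):
```typescript
import * as eks from "@pulumi/eks";

// Sketch: map an IAM role into aws-auth at cluster creation time; the ARN,
// username, and group are placeholders, not a recommended setup.
const cluster = new eks.Cluster("sandbox", {
    roleMappings: [{
        roleArn: "arn:aws:iam::123456789012:role/team-a-iam-role",
        username: "team-a-user",
        groups: ["team-a-group"],
    }],
});
```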
e
👍