# general
o
if you modify it via the roleMappings available in the eks.Cluster creation, it updates the cluster and all kinds of dependent objects, which will cause breaking changes
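(For context, a minimal sketch of the pattern being discussed - an eks.Cluster created with roleMappings. The names and role ARN here are placeholders for illustration, not values from this thread:)

```
import * as eks from "@pulumi/eks";

// Sketch only: names and the role ARN are placeholders.
const cluster = new eks.Cluster("cluster", {
    roleMappings: [
        {
            // IAM role that should get access to the cluster.
            roleArn: "arn:aws:iam::111111111111:role/example-admin",
            // Kubernetes username the IAM role is mapped to.
            username: "example-admin",
            // Kubernetes groups granted to that username.
            groups: ["system:masters"],
        },
    ],
});

// Editing the roleMappings above is the change that triggers the cascade
// of replacements discussed below.
export const kubeconfig = cluster.kubeconfig;
```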
w
cc @microscopic-florist-22719 for thoughts on this. We were talking about this general space yesterday. I haven't yet had a chance to understand whether this is a fundamental EKS thing, or something about how @pulumi/eks structures the use of roleMappings.
o
man it's really bad. I'm realizing how good GKE is now
w
🙂
o
I'm going to have to figure out how to dynamically build and apply that configmap
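(A rough sketch of what hand-building that configmap could look like, assuming you render the aws-auth mapRoles YAML yourself and apply it with the Kubernetes provider. The ARN, usernames, groups, and the kubeconfig config key are placeholders, not from this stack:)

```
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Placeholder mappings; the ARN, username, and group are illustrative only.
const roleMappings = [
    {
        roleArn: "arn:aws:iam::111111111111:role/example-developer",
        username: "example-developer",
        groups: ["example-developers"],
    },
];

// Render the mapRoles YAML that EKS reads from the aws-auth ConfigMap.
const mapRoles = roleMappings
    .map(m => [
        `- rolearn: ${m.roleArn}`,
        `  username: ${m.username}`,
        `  groups:`,
        ...m.groups.map(g => `    - ${g}`),
    ].join("\n"))
    .join("\n");

// Assumes the cluster's kubeconfig is available somewhere, e.g. as stack config.
const config = new pulumi.Config();
const kubeconfig = config.requireSecret("kubeconfig");
const provider = new k8s.Provider("cluster-k8s", { kubeconfig });

// Hand-managed aws-auth ConfigMap in kube-system.
const awsAuth = new k8s.core.v1.ConfigMap("aws-auth", {
    metadata: { name: "aws-auth", namespace: "kube-system" },
    data: { mapRoles },
}, { provider });
```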
w
I think there is a chance that this is actually something we'll be able to improve in the @pulumi/eks package - as I don't think this is actually baked into the core aws.eks.Cluster itself, but is a manually-managed layer on top.
o
ok
happy to work with anybody or get thoughts from the community on how they are handling this
looks like it changes the output of the provider object, and many things depend on that, so it wants to update them
m
@orange-tailor-85423 can you elaborate a bit more on what you're seeing? Assuming you're using @pulumi/eks, changing role mappings should only affect the node access configmap. Everything else should remain the same.
o
so here's the roleMapping section from when the cluster gets stood up
roleMappings: [
    {
        roleArn: identityStack.getOutput("infrastructureManagementRoleArn"),
        username: identityStack.getOutput("infrastructureManagementRoleArn"),
        groups: ["system:masters"],
    },
    {
        roleArn: identityStack.getOutput("applicationManagementRoleArn"),
        username: identityStack.getOutput("applicationManagementRoleArn"),
        groups: ["system:masters"],
    },
],
should I not be using roleMappings to map developer/admin/dba roles to AWS IAM?
subsequent objects leverage the cluster.provider:
so when the cluster changes, things that use that provider get replaced
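(A minimal sketch of the pattern being described - downstream Kubernetes resources created with cluster.provider, so a replacement of that provider cascades to them. The namespace here is just an example:)

```
import * as eks from "@pulumi/eks";
import * as k8s from "@pulumi/kubernetes";

const cluster = new eks.Cluster("cluster", {
    // roleMappings, node settings, etc.
});

// Anything that passes cluster.provider inherits its lifecycle: if the
// provider is replaced (e.g. its kubeconfig changes), these resources are
// replaced as well.
const devNamespace = new k8s.core.v1.Namespace("dev", {
    metadata: { name: "dev" },
}, { provider: cluster.provider });
```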
I guess fundamentally after understanding IAM/RBAC in GCP I'm back to square one with AWS/EKS
m
so when the cluster changes, things that use that provider get replaced
This is the part that surprises me
The provider itself shouldn't be affected by the changes to the role mappings
Do you have an example diff?
(i.e. the output from a pulumi preview)
o
sure
tearing down my cluster and rebuilding - will follow up
m
Also, what does pulumi version report?
o
will build it "as-is" - then add in my changes
m
thanks! 🙂
o
sorry for all the hassle
m
no worries - making EKS clusters easier to manage is something we care pretty deeply about
o
@microscopic-florist-22719 so I have my cluster - builds fine
has some roleArns in roleMappings and some userMappings
update the roleMappings section with a new mapping:
{
    roleArn: "arn:aws:iam::630000000674:role/k8s.mb.developer",
    username: "k8s.mb.developer:{{AccountID}}{{SessionName}}",
    groups: ["k8s.mb.developer"],
}
run preview
❯ pulumi preview
Previewing update (MINDBODY-Platform/eks-casey-robertson):

     Type                                                           Name                      Plan     Info
     pulumi:pulumi:Stack                                            eks-eks-casey-robertson
 >-  ├─ pulumi:pulumi:StackReference                                identityStack             read
     ├─ kubernetes:helm.sh:Chart                                    kube-state-metrics
 +-  │  ├─ kubernetes:extensions:Deployment                         kube-state-metrics        replace  [diff: ~provider]
 +-  │  ├─ kubernetes:core:Service                                  kube-state-metrics        replace  [diff: ~provider]
 +-  │  ├─ kubernetes:rbac.authorization.k8s.io:ClusterRole         kube-state-metrics        replace  [diff: ~provider]
 +-  │  ├─ kubernetes:rbac.authorization.k8s.io:ClusterRoleBinding  kube-state-metrics        replace  [diff: ~provider]
 +-  │  └─ kubernetes:core:ServiceAccount                           kube-state-metrics        replace  [diff: ~provider]
 +-  ├─ kubernetes:core:Namespace                                   dev                       replace  [diff: ~provider]
     └─ eks:index:Cluster                                           cluster
 +-     ├─ pulumi:providers:kubernetes                              cluster-provider          replace  [diff: ~kubeconfig]
 +-     ├─ aws:cloudformation:Stack                                 cluster-nodes             replace
 +-     └─ kubernetes:core:ConfigMap                                cluster-nodeAccess        replace  [diff: ~data]

Resources:
    +-9 to replace
    24 unchanged
since the provider updates, all these other things want to get replaced
that's not good
w
Opened https://github.com/pulumi/pulumi-eks/issues/46 to track this - I expect this is something we can improve.