# aws
s
here is the diff of that resource in the job that performed the `recreate` operation (I put placeholders for the AWS Account #s):
eks:index:Cluster$kubernetes:core/v1:ConfigMap (kettleos-eks-dev-nodeAccess)
++ kubernetes:core/v1:ConfigMap (create-replacement)
    [id=kube-system/aws-auth]
    [urn=urn:pulumi:dev::base::eks:index:Cluster$kubernetes:core/v1:ConfigMap::kettleos-eks-dev-nodeAccess]
    __inputs           : {
        data      : {
            mapRoles: "- rolearn: 'arn:aws:iam::[AWSDevAccount]:role/kettleos-eks-dev-ng-role-4a11bab'
  username: 'system:node:{{EC2PrivateDNSName}}'
  groups:
    - 'system:bootstrappers'
    - 'system:nodes'
" => "- rolearn: arn:aws:iam::[AWSDevAccount]:role/kettleos-eks-dev-ng-role-4a11bab
  username: system:node:{{EC2PrivateDNSName}}
  groups:
    - system:bootstrappers
    - system:nodes
"
        }
    }
    data               : {
        mapRoles: "- rolearn: 'arn:aws:iam::[AWSDevAccount]:role/kettleos-eks-dev-ng-role-4a11bab'
  username: 'system:node:{{EC2PrivateDNSName}}'
  groups:
    - 'system:bootstrappers'
    - 'system:nodes'
- rolearn: 'arn:aws:iam::[AWSDevAccount]:role/OrganizationAccountAccessRole'
  username: 'developer'
  groups:
    - 'system:masters'
- rolearn: 'arn:aws:iam::[AWSDevAccount]:role/OrganizationAccountReadOnlyRole'
  username: 'developer'
  groups:
    - 'system:masters'
" => "- rolearn: arn:aws:iam::[AWSDevAccount]:role/kettleos-eks-dev-ng-role-4a11bab
  username: system:node:{{EC2PrivateDNSName}}
  groups:
    - system:bootstrappers
    - system:nodes
"
    }
    metadata           : {
        annotations      : {
            kubectl.kubernetes.io/last-applied-configuration: "{"apiVersion":"v1","data":{"mapRoles":"- rolearn: 'arn:aws:iam::[AWSDevAccount]:role/kettleos-eks-dev-ng-role-4a11bab'\n  username: 'system:node:{{EC2PrivateDNSName}}'\n  groups:\n    - 'system:bootstrappers'\n    - 'system:nodes'\n"},"kind":"ConfigMap","metadata":{"labels":{"app.kubernetes.io/managed-by":"pulumi"},"name":"aws-auth","namespace":"kube-system"}}
" => "{"apiVersion":"v1","data":{"mapRoles":"- rolearn: arn:aws:iam::[AWSDevAccount]:role/kettleos-eks-dev-ng-role-4a11bab\n  username: system:node:{{EC2PrivateDNSName}}\n  groups:\n    - system:bootstrappers\n    - system:nodes\n"},"kind":"ConfigMap","metadata":{"labels":{"app.kubernetes.io/managed-by":"pulumi"},"name":"aws-auth","namespace":"kube-system"}}
"
        }
        creationTimestamp: "2021-04-18T04:32:06Z" => "2022-12-28T06:51:37Z"
        managedFields    : [
            [0]: {
                fieldsV1  : {
                    f:data    : {
                        .         : {}
                        f:mapRoles: {}
                    }
                }
                manager   : "pulumi-resource-kubernetes" => "pulumi-kubernetes"
                time      : "2021-04-18T04:32:06Z" => "2022-12-28T06:51:37Z"
            }
            [1]: {
                apiVersion: "v1"
                fieldsType: "FieldsV1"
                fieldsV1  : {
                    f:data: {
                        f:mapRoles: {}
                    }
                }
                manager   : "kubectl-edit"
                operation : "Update"
                time      : "2021-04-18T05:11:50Z"
            }
        ]
        resourceVersion  : "190307907" => "210893469"
        uid              : "5ea6eea4-d11e-4a72-969d-19ab38f9b590" => "16f0f6ce-e58b-44c9-8a3b-c58910640b21"
    }
Diagnostics:
m
Hi, not sure if I misread the output, but what is the difference in `system:masters` in the old ConfigMap?
s
I can check, but that part is 💯 managed by Pulumi's EKS module since that's the creator's mapping, which doesn't get touched
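For context, a minimal sketch of where that Pulumi-managed mapping comes from, assuming an @pulumi/eks program along these lines (the resource names are taken from the diff above; everything else is an assumption):
```typescript
import * as aws from "@pulumi/aws";
import * as eks from "@pulumi/eks";

// Sketch only: the eks.Cluster component renders the instance role below into
// the aws-auth ConfigMap's mapRoles as the node/creator mapping seen in the diff.
const ngRole = new aws.iam.Role("kettleos-eks-dev-ng-role", {
    assumeRolePolicy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
            Effect: "Allow",
            Principal: { Service: "ec2.amazonaws.com" },
            Action: "sts:AssumeRole",
        }],
    }),
});

const cluster = new eks.Cluster("kettleos-eks-dev", {
    instanceRoles: [ngRole],
    // VPC, subnet, and node group configuration elided.
});
```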
m
hmm
s
oh no I was wrong about that part @many-telephone-49025
the `system:masters` entries are what we have to add manually after the cluster is created in order for people to access the k8s cluster either in the Web Console or via the CLI without using the creator role
as mentioned above via the GH Issue, manually editing the `aws-auth` ConfigMap is the only way to accomplish this, so a `recreate` really blows it apart
m
But was it edited in Pulumi, and not on the cluster via the k8s API? Sorry for asking what's maybe an obvious question
s
pulumi doesn't expose it, likely because the EKS API doesn't expose it (that's the reason for that GH Issue) - it can't be automated atm except through the `eksctl` tool afaik
so the only way to edit that ConfigMap is to log in and manually edit the ConfigMap resource via kubectl
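For reference, these are the two manually-added entries from the old ConfigMap data in the diff above that the replacement drops; shown here as a plain string only to make the manual kubectl step concrete (the account ID remains a placeholder):
```typescript
// The mapRoles entries that get pasted in via
// `kubectl -n kube-system edit configmap aws-auth` after the cluster exists,
// and that a recreate of the ConfigMap wipes out. [AWSDevAccount] is a placeholder.
const manuallyAddedMapRoles = `
- rolearn: arn:aws:iam::[AWSDevAccount]:role/OrganizationAccountAccessRole
  username: developer
  groups:
    - system:masters
- rolearn: arn:aws:iam::[AWSDevAccount]:role/OrganizationAccountReadOnlyRole
  username: developer
  groups:
    - system:masters
`;
```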
m
then I would run a `pulumi refresh` to update the state with the changes in the cluster so they are in sync
s
I refreshed it
afaik it was refreshed b/c I had been running refresh jobs on every change I made
m
if you run the refresh job now, it should be fine, as they are now in sync again?
s
the previous update was a refresh that succeeded
m
did you update the pulumi-kubernetes provider?
s
I’m checking
m
There was a change to the manager name two weeks ago -> https://github.com/pulumi/pulumi-kubernetes/pull/2271
s
I saw that when I was looking into it, that’s why I pointed this out
m
You are not running in SSA (server-side apply) mode?
s
no
we’re using the OIDC IAM Role
this infrastructure is 2 years old
the manager name shouldn’t trigger a recreate - I would expect it to trigger an update op
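(For context on the SSA question: server-side apply is opt-in on the pulumi-kubernetes 3.x provider, so a setup that never enabled it stays client-side. A minimal sketch of how it would be turned on, assuming an explicit provider; the kubeconfig handling here is made up:)
```typescript
import * as k8s from "@pulumi/kubernetes";

// Hypothetical: the kubeconfig would normally come from the eks.Cluster output.
const kubeconfig = process.env.KUBECONFIG ?? "~/.kube/config";

// Server-side apply is opt-in in pulumi-kubernetes 3.x; without this flag the
// provider manages resources with client-side apply.
const ssaProvider = new k8s.Provider("ssa", {
    kubeconfig,
    enableServerSideApply: true,
});
```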
btw the `pulumi/kubernetes` package changed from `^3.22.1` to `^3.23.1`, but more important, I think, were the changes to the `pulumi/awsx` package. Here are all the dep changes:
-        "@pulumi/aws": "^5.20.0",
-        "@pulumi/awsx": "^0.40.1",
-        "@pulumi/docker": "^3.6.0",
-        "@pulumi/kubernetes": "^3.22.1",
-        "@pulumi/pulumi": "^3.46.1",
+        "@pulumi/aws": "^5.25.0",
+        "@pulumi/awsx": "^1.0.1",
+        "@pulumi/docker": "^3.6.1",
+        "@pulumi/kubernetes": "^3.23.1",
+        "@pulumi/pulumi": "^3.50.2",
m
Oh yes
AWSX went GA with version 1.0.0
Let me check the recreate on managed fields quickly to rule out this one
👍 1
hmm no
no recreate when updating from 3.22.1 to 3.23.1 on a ConfigMap
s
thanks
m
I did an edit and a refresh before I updated
it made an update on a field but not a recreate of the whole CM
s
did you use the previous version of the package to create the initial EKS cluster?
m
I just used a local k8s to test only the update of the kubernetes provider
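Roughly the kind of minimal repro described here, assuming whatever cluster the current kubeconfig points at (the names are made up): create a ConfigMap with @pulumi/kubernetes 3.22.1, bump the dependency to 3.23.1, refresh, and run pulumi up again to see whether the provider reports an update or a replace.
```typescript
import * as k8s from "@pulumi/kubernetes";

// Minimal repro sketch: deploy with @pulumi/kubernetes 3.22.1 first, then bump
// to 3.23.1 and re-run `pulumi up` to see how the manager-name change surfaces.
const cm = new k8s.core.v1.ConfigMap("manager-name-test", {
    metadata: { namespace: "default" },
    data: { hello: "world" },
});

export const configMapName = cm.metadata.name;
```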
s
well, I have to perform this on our production cluster at some point, so you're saying that not refreshing could be the problem?
m
Would love to connect on a call with you to check before running in prod, so we could get more info about the before state.
Tomorrow I am unfortunately OOO as it's a bank holiday in Germany
s
That would be fantastic. I won’t run in prod until next week
m
Awesome! Feel free to send an invite to engin@pulumi.com with a time slot that fits you!
✅ 1