stale-tomato-37875
02/14/2025, 5:33 AM
I'm upgrading the @pulumi/eks version from v2.2.1 to v2.8.1.
I keep receiving errors like the one below:
kubernetes:core/v1:ConfigMap (brainfish-prod-nodeAccess):
error: configured Kubernetes cluster is unreachable: unable to load schema information from the API server: the server has asked for the client to provide credentials
I found a similar issue from 5 years ago in the Pulumi GitHub issues, so I suspect the root cause here is different.
I'm struggling to debug this because the error isn't very indicative.
The AWS EKS cluster currently uses ConfigMap-only auth mode. Would you suggest switching the auth mode to API before upgrading?
Any hints are appreciated.
quick-house-41860
02/14/2025, 9:52 AM
You could run aws eks update-kubeconfig ... to configure the kubeconfig, followed by something like kubectl get nodes to confirm you can access the cluster.
If that works you could compare the kubeconfig the AWS CLI generates with the one the provider generates (it should be an output on the cluster component).
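(For illustration, a minimal sketch of exposing the component-generated kubeconfig for that comparison; it assumes the eks.Cluster instance named cluster from the program pasted later in this thread, and the output name is hypothetical.)

import * as eks from "@pulumi/eks";
import * as pulumi from "@pulumi/pulumi";

// Assumption: `cluster` is the eks.Cluster defined in the program below.
declare const cluster: eks.Cluster;

// Export the kubeconfig @pulumi/eks generated so it can be diffed against the
// file that `aws eks update-kubeconfig` writes locally.
export const generatedKubeconfig = pulumi.secret(cluster.kubeconfig);
// After `pulumi up`:
//   pulumi stack output generatedKubeconfig --show-secrets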
stale-tomato-37875
02/14/2025, 10:12 AM
quick-house-41860
02/14/2025, 10:19 AM
Are you using the cluster's provider output? Or are you creating a provider manually by using the kubeconfig output?
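(A quick sketch of the two patterns being asked about, assuming an eks.Cluster named cluster; the namespace and provider names are illustrative.)

import * as eks from "@pulumi/eks";
import * as k8s from "@pulumi/kubernetes";

declare const cluster: eks.Cluster; // assumption: the cluster from this thread

// (a) Use the provider the eks component already created:
new k8s.core.v1.Namespace(
  "via-component-provider",
  {},
  { provider: cluster.provider }
);

// (b) Or build a provider manually from the kubeconfig output:
const manualProvider = new k8s.Provider("manual", {
  kubeconfig: cluster.kubeconfig.apply(JSON.stringify),
});
new k8s.core.v1.Namespace(
  "via-manual-provider",
  {},
  { provider: manualProvider }
);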
quick-house-41860
02/14/2025, 10:20 AM
You can use the getKubeconfig method instead to set the profileName to use.
stale-tomato-37875
02/14/2025, 10:25 AM
stale-tomato-37875
02/14/2025, 10:26 AM
stale-tomato-37875
02/14/2025, 10:28 AM
import * as aws from "@pulumi/aws";
import * as eks from "@pulumi/eks";
import * as pulumi from "@pulumi/pulumi";
import { ORG } from "./constants";
import {
  stack,
  defaultVpcSubnetsIds,
  defaultSecurityGroupId,
  defaultVpcId,
} from "./stackRef";
import * as config from "./config";

const securityGroup = aws.ec2.SecurityGroup.get(
  "default",
  defaultSecurityGroupId
);

const awsAccountId = aws
  .getCallerIdentity()
  .then((current) => current.accountId);

const eksMasterRole = new aws.iam.Role(`${ORG}-eks-${stack}-master-role`, {
  assumeRolePolicy: pulumi.interpolate`{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "",
        "Effect": "Allow",
        "Principal": {
          "AWS": "arn:aws:iam::${awsAccountId}:root"
        },
        "Action": "sts:AssumeRole"
      }
    ]
  }`,
});

// TODO: Remove this line when deploying to AWS
if (stack == "dev") {
  process.env.AWS_PROFILE = "brainfish-dev"; // Make it compatible with local deployment
}
if (stack == "prod" || stack == "eu" || stack == "au") {
  process.env.AWS_PROFILE = "brainfish-prod"; // Make it compatible with local deployment
}

const cluster = new eks.Cluster(`${ORG}-${stack}`, {
  vpcId: defaultVpcId,
  subnetIds: defaultVpcSubnetsIds,
  deployDashboard: false,
  nodeGroupOptions: {
    minSize: config.EKS_MIN_WORKER_NODE_NUMBER,
    maxSize: config.EKS_MAX_WORKER_NODE_NUMBER,
    desiredCapacity: config.EKS_DESIRED_WORKER_NODE_NUMBER,
    nodeRootVolumeEncrypted: true,
    amiId: config.EKS_NODE_AMI_ID, // pin the Amazon EKS-optimized Amazon Linux 2 AMI to avoid accidental node destruction
    nodeRootVolumeSize: 100, // 100GB, reasonable default
    extraNodeSecurityGroups: [securityGroup],
    instanceType: config.EKS_NODE_INSTANCE_TYPE,
    nodeRootVolumeType: config.EKS_NODE_ROOT_VOLUME_TYPE as
      | "standard"
      | "gp2"
      | "gp3"
      | "st1"
      | "sc1"
      | "io1",
  },
  roleMappings: [
    // Provides full administrator access to the k8s cluster
    {
      groups: ["system:masters"],
      roleArn: eksMasterRole.arn,
      username: "pulumi:master-role-user", // not used but required
    },
  ],
});

if (stack == "prod" || stack == "dev" || stack == "eu" || stack == "au") {
  process.env.AWS_PROFILE = ""; // Reset the AWS_PROFILE
}

export const clusterKubeconfigOrigin = pulumi.secret(cluster.kubeconfig);
export const clusterKubeconfig = pulumi.secret(
  cluster.getKubeconfig({
    roleArn: eksMasterRole.arn,
  })
);

// The security group created by @pulumi/eks (Pulumi AWS Crosswalk) for the
// self-managed node groups; it is also added to the AWS EKS cluster's
// additional security groups field.
export const clusterNodeSecurityGroupId = cluster.nodeSecurityGroup.id;

// The security group AWS creates by default for every new EKS cluster. It is
// applied to the ENIs attached to the EKS control plane, as well as to any
// managed workloads.
export const clusterSecurityGroupId = cluster.clusterSecurityGroup.id;
I'm encountering this issue with the Pulumi script above.
stale-tomato-37875
02/14/2025, 10:31 AM
What do you mean by "use the getKubeconfig method instead to set the profileName to use"?
quick-house-41860
02/14/2025, 11:20 AM
The cluster has a getKubeconfig method that you can use to generate a kubeconfig instead of using the one from the output.
It allows you to set an AWS profile name to be included.
Using that you should be able to generate a kubeconfig that looks like the old v2.2.1 one.
quick-house-41860
02/14/2025, 11:22 AM
stale-tomato-37875
02/14/2025, 11:23 AM
quick-house-41860
02/14/2025, 11:25 AM
It takes a profileName argument, like so I think:
cluster.getKubeconfig({
  roleArn: eksMasterRole.arn,
  profileName: "..."
})
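(Sketch of how that generated kubeconfig could then back an explicit Kubernetes provider; the profile value and resource names below are assumptions, and cluster / eksMasterRole refer to the program above.)

import * as k8s from "@pulumi/kubernetes";

// Generate a kubeconfig that still references an AWS profile, as the old
// v2.2.1 output did.
const kubeconfigWithProfile = cluster.getKubeconfig({
  roleArn: eksMasterRole.arn,
  profileName: "brainfish-prod", // assumed profile name
});

// Use it for an explicit provider that workload resources can opt into.
const appsProvider = new k8s.Provider("apps-provider", {
  kubeconfig: kubeconfigWithProfile,
});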
stale-tomato-37875
02/14/2025, 11:28 AM
stale-tomato-37875
02/14/2025, 11:29 AM
quick-house-41860
02/14/2025, 11:30 AM
stale-tomato-37875
02/14/2025, 11:41 AM
1. @pulumi/eks generates an AWS EKS cluster and an aws-auth ConfigMap by default (this was historically the approach AWS preferred)
2. That ConfigMap includes the IAM role used to create the cluster and grants that IAM role system:masters permissions
3. Therefore, follow-up updates against that cluster can succeed because of this
4. If there is any change affecting that ConfigMap's authentication (in the current case, the upgraded eks behaviour removes some fields like the profile name), the change will fail
5. This is because the ConfigMap only allows a very specific IAM role combination, so the change to the default provider caused by upgrading the eks package fails authentication
6. This becomes the deadlock of the upgrade
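(For reference: @pulumi/eks exposes a providerCredentialOpts cluster option that controls the kubeconfig used by the component's internal Kubernetes provider, i.e. the one managing the aws-auth ConfigMap. A minimal sketch follows; the profile name is an assumption, and the thread does not confirm this is the right fix here.)

// Sketch only: keep an AWS profile in the kubeconfig used by the eks
// component's internal provider (the one that manages aws-auth).
const cluster = new eks.Cluster(`${ORG}-${stack}`, {
  // ...same options as in the program above...
  providerCredentialOpts: {
    profileName: "brainfish-prod", // assumed profile name
  },
});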
quick-house-41860
02/14/2025, 11:45 AM
quick-house-41860
02/14/2025, 11:46 AM
stale-tomato-37875
02/14/2025, 11:48 AM
stale-tomato-37875
02/14/2025, 11:49 AM
stale-tomato-37875
02/14/2025, 11:49 AM
stale-tomato-37875
02/14/2025, 11:49 AM
stale-tomato-37875
02/14/2025, 11:49 AM
quick-house-41860
02/14/2025, 11:54 AM
The API access mode improves this situation a lot.
That diff is expected (that's the profile change), but this is the "user facing" k8s provider. You should see another k8s provider in your preview that's called brainfish-prod-eks-k8s. That's the one used for the auth ConfigMap. Do you see any changes for that one?
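(A hedged sketch of what moving to API-based access could look like; the option and enum names should be verified against the @pulumi/eks and @pulumi/aws versions in use, and the resource names are illustrative.)

// Sketch: on the eks.Cluster, set something like
//   authenticationMode: eks.AuthenticationMode.ApiAndConfigMap,
// then grant the master role cluster admin through an access entry rather
// than through the aws-auth role mapping.
new aws.eks.AccessEntry("master-access-entry", {
  clusterName: cluster.eksCluster.name,
  principalArn: eksMasterRole.arn,
});

new aws.eks.AccessPolicyAssociation("master-admin-policy", {
  clusterName: cluster.eksCluster.name,
  principalArn: eksMasterRole.arn,
  policyArn: "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy",
  accessScope: { type: "cluster" },
});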
stale-tomato-37875
02/14/2025, 12:00 PM
stale-tomato-37875
02/14/2025, 12:01 PM
stale-tomato-37875
02/14/2025, 12:01 PM
stale-tomato-37875
02/14/2025, 12:05 PM
quick-house-41860
02/14/2025, 12:51 PM