# aws
g
Hello all, I’m trying to deploy a k8s cluster. The AWS env vars are set to my root account (where the Pulumi state bucket lives), but aws:profile is set to our test sub AWS account. If I deploy without the profile, against the env var credentials, it works; with the profile, the other AWS-specific stacks deploy correctly from the GitHub Action I’m running but it stops at k8s with authorisation failed errors. Any advice? I’m seeing this a lot in GitHub issues but without a suitable solution. Errors in thread.
Diagnostics:

    kubernetes:core/v1:Namespace (cluster-svcs):
      error: configured Kubernetes cluster is unreachable: unable to load schema information from the API server: the server has asked for the client to provide credentials

    kubernetes:core/v1:Namespace (app-svcs):
      error: configured Kubernetes cluster is unreachable: unable to load schema information from the API server: the server has asked for the client to provide credentials

    kubernetes:storage.k8s.io/v1:StorageClass (cluster-gp2-encrypted):
      error: configured Kubernetes cluster is unreachable: unable to load schema information from the API server: the server has asked for the client to provide credentials

    kubernetes:rbac.authorization.k8s.io/v1:ClusterRoleBinding (privileged):
      error: configured Kubernetes cluster is unreachable: unable to load schema information from the API server: the server has asked for the client to provide credentials

    kubernetes:core/v1:Namespace (ingress-nginx):
      error: configured Kubernetes cluster is unreachable: unable to load schema information from the API server: the server has asked for the client to provide credentials

    eks:index:VpcCni (cluster-vpc-cni):
      error: Command failed: kubectl apply -f /tmp/tmp-1881iwETQCX8mFB1.tmp
      error: You must be logged in to the server (the server has asked for the client to provide credentials)

    pulumi:pulumi:Stack (cluster-dev):
      error: update failed
      error: You must be logged in to the server (the server has asked for the client to provide credentials)

    kubernetes:core/v1:ConfigMap (cluster-nodeAccess):
      error: configured Kubernetes cluster is unreachable: unable to load schema information from the API server: the server has asked for the client to provide credentials

    kubernetes:storage.k8s.io/v1:StorageClass (cluster-sc1):
      error: configured Kubernetes cluster is unreachable: unable to load schema information from the API server: the server has asked for the client to provide credentials

    kubernetes:policy/v1beta1:PodSecurityPolicy (restrictive):
      error: configured Kubernetes cluster is unreachable: unable to load schema information from the API server: the server has asked for the client to provide credentials

    kubernetes:core/v1:Namespace (apps):
      error: configured Kubernetes cluster is unreachable: unable to load schema information from the API server: the server has asked for the client to provide credentials
b
Can you share your code? You likely need to add provider credential opts (providerCredentialOpts) to your cluster code
Just the code for the cluster is fine
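Something like this (a minimal sketch, assuming your aws:profile stack config points at the sub-account profile):

import * as aws from "@pulumi/aws";
import * as eks from "@pulumi/eks";

// Minimal sketch: make the generated kubeconfig use the same AWS profile
// that the stack's aws:profile config points at.
const cluster = new eks.Cluster("cluster", {
    providerCredentialOpts: {
        profileName: aws.config.profile,
    },
    // ...the rest of your cluster arguments
});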
g
I just found that. Just pushed the following:
import * as aws from "@pulumi/aws";
import * as eks from "@pulumi/eks";

// projectName, config, and the nodegroup IAM role names are defined elsewhere in this stack.

// Create an AWS provider instance bound to the configured region and profile.
const awsProvider = new aws.Provider(`${projectName}-aws`, {
    region: aws.config.region,
    profile: aws.config.profile,
});

// Create an EKS cluster.
const cluster = new eks.Cluster(`${projectName}`, {
    providerCredentialOpts: {
        profileName: aws.config.profile,
    },
    instanceRoles: [
        aws.iam.Role.get("adminsIamRole", stdNodegroupIamRoleName),
        aws.iam.Role.get("devsIamRole", perfNodegroupIamRoleName),
    ],
    roleMappings: [
        {
            roleArn: config.adminsIamRoleArn,
            groups: ["system:masters"],
            username: "pulumi:admins",
        },
        {
            roleArn: config.devsIamRoleArn,
            groups: ["pulumi:devs"],
            username: "pulumi:alice",
        },
    ],
    vpcId: config.vpcId,
    publicSubnetIds: config.publicSubnetIds,
    privateSubnetIds: config.privateSubnetIds,
    storageClasses: {
        "gp2-encrypted": { type: "gp2", encrypted: true},
        "sc1": { type: "sc1"}
    },
    nodeAssociatePublicIpAddress: false,
    skipDefaultNodeGroup: true,
    deployDashboard: false,
    version: "1.24",
    tags: {
        "Project": "k8s-aws-cluster",
        "Org": "pulumi",
    },
    clusterSecurityGroupTags: { "ClusterSecurityGroupTag": "true" },
    nodeSecurityGroupTags: { "NodeSecurityGroupTag": "true" },
    enabledClusterLogTypes: ["api", "audit", "authenticator", "controllerManager", "scheduler"],
    // endpointPublicAccess: false,     // Requires bastion to access cluster API endpoint
    // endpointPrivateAccess: true,     // Requires bastion to access cluster API endpoint
}, {
    provider: awsProvider,
});
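For reference, the namespaces and storage classes in the diagnostics follow the usual pattern of targeting the cluster's Kubernetes provider, roughly like this (a sketch using the cluster object above, not the exact code):

import * as k8s from "@pulumi/kubernetes";

// Pattern used for the failing resources: each one targets the EKS cluster's
// Kubernetes provider, which is built from the generated kubeconfig.
const clusterSvcs = new k8s.core.v1.Namespace("cluster-svcs", {}, {
    provider: cluster.provider,
});
const appSvcs = new k8s.core.v1.Namespace("app-svcs", {}, {
    provider: cluster.provider,
});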
b
That should do it
g
Yeah, it took quite a bit of hunting until I found the pull request. Will there be a change soon so that the provider config within the stack is also honoured by k8s?
Or is this by design?
b
This is by design. What that does is add the required AWS_PROFILE to the generated kubeconfig
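If you want to check, export the generated kubeconfig and look at its exec/user section (a quick sketch, assuming the cluster object from your snippet):

import * as pulumi from "@pulumi/pulumi";

// Quick way to verify (sketch): export the generated kubeconfig. With
// providerCredentialOpts.profileName set, its user/exec section should carry
// an AWS_PROFILE env entry for your profile.
export const kubeconfig = pulumi.secret(cluster.kubeconfig);

Then pulumi stack output kubeconfig --show-secrets should show the profile in there.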
g
I’m still facing failures unfortunately; I can see the cluster has been created in EKS. I’m going to quickly drop a gist here along with the logs from GH Actions.
just added the logs to that gist too
b
Okay, this is because your profile doesn’t exist in GitHub Actions
So you’ve set up a profile locally, but the profile needs to exist in Actions too
g
Ah, this is set as a step
      # Add new AWS profile from secrets, named "development"
      - name: Add development AWS profile from secrets 🔑
        run: |
          aws configure set aws_access_key_id ${{ secrets.DEVELOPMENT_AWS_ACCESS_KEY_ID }} --profile development
          aws configure set aws_secret_access_key ${{ secrets.DEVELOPMENT_AWS_SECRET_ACCESS_KEY }} --profile development
          aws configure set region eu-west-2 --profile development
          aws configure set output json --profile development
I have two other Pulumi stacks deploying to development correctly
So I am confident the profile is there
I’m going to try locally and spit out the kubeconfig
The eks:index:Cluster gets created at least 😩
I saw something in one of the GitHub issues about someone using a roleArn instead; I’m going to try that
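For reference, the roleArn variant from that issue looks roughly like this (the ARN is a placeholder, not a real role):

import * as eks from "@pulumi/eks";

// Sketch of the roleArn variant: instead of a named profile, the generated
// kubeconfig assumes an IAM role directly. The ARN below is a placeholder.
const cluster = new eks.Cluster("cluster", {
    providerCredentialOpts: {
        roleArn: "arn:aws:iam::123456789012:role/deployer", // hypothetical role
    },
    // ...remaining cluster arguments as before
});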
Okay! Fixed it! I created a default profile for the root account instead of referring to it with env vars. Now that there were no AWS env vars set, I was able to deploy.