# general
d
Hi, I created a `VPC` using `pulumi_aws` and a `cluster` using `pulumi_eks`, but in the end I received the error `no nodes available to schedule pods`. Here is the code:
https://github.com/omidraha/pulumi_example/blob/main/vpc.py
https://github.com/omidraha/pulumi_example/blob/main/iam.py
https://github.com/omidraha/pulumi_example/blob/main/cluster.py
https://github.com/omidraha/pulumi_example/blob/main/setup.py
$ kubectl get pods -A 
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   coredns-6ff9c46cd8-98sck   0/1     Pending   0          24h
kube-system   coredns-6ff9c46cd8-hrj56   0/1     Pending   0          24h
$ kubectl get event -A
NAMESPACE     LAST SEEN   TYPE      REASON             OBJECT                         MESSAGE
kube-system   38s         Warning   FailedScheduling   pod/coredns-6ff9c46cd8-98sck   no nodes available to schedule pods
kube-system   68s         Warning   FailedScheduling   pod/coredns-6ff9c46cd8-hrj56   no nodes available to schedule pods
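For reference, a minimal `pulumi_eks` cluster in Python usually looks something like the sketch below. The VPC and subnet values are placeholders standing in for whatever vpc.py exports, and the instance type and counts are illustrative; the point is that if no node group actually gets created, coredns has nowhere to run and stays Pending.

import pulumi
import pulumi_eks as eks

# Placeholders: in the real code these come from the VPC defined in vpc.py.
vpc_id = "vpc-0123456789abcdef0"
private_subnet_ids = ["subnet-aaaa1111", "subnet-bbbb2222"]
public_subnet_ids = ["subnet-cccc3333", "subnet-dddd4444"]

cluster = eks.Cluster(
    "example-cluster",
    vpc_id=vpc_id,
    private_subnet_ids=private_subnet_ids,
    public_subnet_ids=public_subnet_ids,
    # These control the default node group; without worker nodes,
    # every pod (including coredns) stays Pending.
    instance_type="t3.medium",
    desired_capacity=2,
    min_size=1,
    max_size=3,
)

pulumi.export("kubeconfig", cluster.kubeconfig)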
t
It looks like you're missing the IAM policies that are needed to allow k8s to interact with EC2.
const cluster_role = new aws.iam.Role('EKS-Cluster-Role', {
        name: 'EKS-Cluster-Role',
        assumeRolePolicy: aws.iam.assumeRolePolicyForPrincipal({
            Service: 'eks.amazonaws.com',
        }),
    }, {
        provider: Provider(config.requireObject<AwsOrganizationConfiguration>('organization').name),
    });

    new aws.iam.RolePolicyAttachment('EKS-Cluster-Role-Policy (AmazonEKSClusterPolicy)', {
        policyArn: aws.iam.ManagedPolicy.AmazonEKSClusterPolicy,
        role: cluster_role.name,
    }, {
        parent: cluster_role,
        dependsOn: cluster_role,
        provider: Provider(config.requireObject<AwsOrganizationConfiguration>('organization').name),
    });

    const worker_role = new aws.iam.Role('EKS-Worker-Role', {
        name: 'EKS-Worker-Role',
        assumeRolePolicy: aws.iam.assumeRolePolicyForPrincipal({
            Service: 'ec2.amazonaws.com',
        }),
    }, {
        provider: Provider(config.requireObject<AwsOrganizationConfiguration>('organization').name),
    });

    new aws.iam.RolePolicyAttachment(`EKS-Worker-Role-policy (AmazonEKSWorkerNodePolicy)`, {
        policyArn: aws.iam.ManagedPolicy.AmazonEKSWorkerNodePolicy,
        role: worker_role.name,
    }, {
        parent: worker_role,
        dependsOn: worker_role,
        provider: Provider(config.requireObject<AwsOrganizationConfiguration>('organization').name),
    });

    new aws.iam.RolePolicyAttachment(`EKS-Worker-Role-policy (AmazonEKS_CNI_Policy)`, {
        policyArn: aws.iam.ManagedPolicy.AmazonEKS_CNI_Policy,
        role: worker_role.name,
    }, {
        parent: worker_role,
        dependsOn: worker_role,
        provider: Provider(config.requireObject<AwsOrganizationConfiguration>('organization').name),
    });

    new aws.iam.RolePolicyAttachment(`EKS-Worker-Role-policy (AmazonEC2ContainerRegistryReadOnly)`, {
        policyArn: aws.iam.ManagedPolicy.AmazonEC2ContainerRegistryReadOnly,
        role: worker_role.name,
    }, {
        parent: worker_role,
        dependsOn: worker_role,
        provider: Provider(config.requireObject<AwsOrganizationConfiguration>('organization').name),
    });
This is what we have in TypeScript.
Then on the cluster, we have
roleArn: cluster_role.arn,
and in aws-auth we have
{
                    rolearn: worker_role.arn,
                    username: 'system:node:{{EC2PrivateDNSName}}',
                    groups: ['system:nodes', 'system:bootstrappers'],
                },
Also, though I think this might be the default in aws-auth.
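In Python with `pulumi_aws`, roughly the same roles and managed-policy attachments could look like the sketch below (the resource names and the small trust-policy helper are illustrative, not taken from the linked repo):

import json
import pulumi_aws as aws

def assume_role_policy(service: str) -> str:
    # Trust policy allowing the given AWS service to assume the role.
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": service},
            "Action": "sts:AssumeRole",
        }],
    })

# Role assumed by the EKS control plane.
cluster_role = aws.iam.Role(
    "eks-cluster-role",
    assume_role_policy=assume_role_policy("eks.amazonaws.com"),
)
aws.iam.RolePolicyAttachment(
    "eks-cluster-policy",
    role=cluster_role.name,
    policy_arn="arn:aws:iam::aws:policy/AmazonEKSClusterPolicy",
)

# Role assumed by the EC2 worker nodes.
worker_role = aws.iam.Role(
    "eks-worker-role",
    assume_role_policy=assume_role_policy("ec2.amazonaws.com"),
)
for name, arn in [
    ("eks-worker-node-policy", "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"),
    ("eks-cni-policy", "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"),
    ("ecr-read-only", "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"),
]:
    aws.iam.RolePolicyAttachment(name, role=worker_role.name, policy_arn=arn)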
d
I added the roles in iam.py and updated cluster.py with instance_role and role_mappings, but I still get the same error: no nodes available to schedule pods.
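For context, wiring those roles into the `pulumi_eks` cluster typically looks something like this sketch (the VPC/subnet values are placeholders again, and cluster_role/worker_role refer to roles like the ones in the IAM sketch above). As far as I know, pulumi_eks adds instance_role to aws-auth on its own, so the explicit role_mappings entry for the worker role may be redundant:

import pulumi_eks as eks

cluster = eks.Cluster(
    "example-cluster",
    vpc_id=vpc_id,
    private_subnet_ids=private_subnet_ids,
    public_subnet_ids=public_subnet_ids,
    service_role=cluster_role,   # control-plane role (roleArn on the cluster)
    instance_role=worker_role,   # node role; pulumi_eks also maps it into aws-auth
    role_mappings=[
        eks.RoleMappingArgs(
            role_arn=worker_role.arn,
            username="system:node:{{EC2PrivateDNSName}}",
            groups=["system:nodes", "system:bootstrappers"],
        )
    ],
    instance_type="t3.medium",
    desired_capacity=2,
)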
t
If you look in EC2, has it actually created any nodes?
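A few ways to check that (assuming the AWS CLI and kubectl are configured for the right account and cluster; the tag filter assumes pulumi_eks's usual kubernetes.io/cluster/<cluster-name> tag, and the cluster name is a placeholder):

# Any nodes registered with the API server at all?
kubectl get nodes

# Any EC2 instances tagged for the cluster?
aws ec2 describe-instances \
  --filters "Name=tag-key,Values=kubernetes.io/cluster/example-cluster" \
  --query "Reservations[].Instances[].{Id:InstanceId,State:State.Name}"

# Did the node group's Auto Scaling group ever scale up?
aws autoscaling describe-auto-scaling-groups \
  --query "AutoScalingGroups[].{Name:AutoScalingGroupName,Desired:DesiredCapacity,Running:length(Instances)}"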