# general
Hi all, is it possible to migrate an existing EKS cluster to EKS Auto Mode?
The docs say EKS Auto Mode does not support DaemonSets.
How do we create a custom NodePool other than the built-in EKS Auto Mode ones? I want to stack the cluster with GPU nodes.
Managed node group
Oh. I am not sure about that in EKS Auto Mode.
I was assuming EKS Auto Mode would manage the nodes, but the built-in node pool types are system and general-purpose only. Wondering if Pulumi supports creating a GPU node pool for EKS in Auto Mode.
Yes, you can absolutely do that! Custom EKS Auto Mode node pools are managed with NodePool CRs. You can create those using the pulumi-kubernetes provider's CustomResource.
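For illustration, a minimal sketch of that pattern (the resource names, the g5 instance-family requirement, and the k8sProvider reference are assumptions for the example, not taken from this thread; the nodeClassRef points at the built-in default EKS NodeClass):

import * as kubernetes from '@pulumi/kubernetes';

// Sketch: a custom Auto Mode node pool is a Karpenter NodePool custom resource
// that references the built-in EKS NodeClass. NodePool is cluster-scoped.
const gpuNodePool = new kubernetes.apiextensions.CustomResource('gpu-nodepool', {
  apiVersion: 'karpenter.sh/v1',
  kind: 'NodePool',
  metadata: { name: 'gpu' },
  spec: {
    template: {
      spec: {
        nodeClassRef: {
          group: 'eks.amazonaws.com',
          kind: 'NodeClass',
          name: 'default',
        },
        requirements: [
          // assumed example: restrict the pool to a GPU instance family
          { key: 'eks.amazonaws.com/instance-family', operator: 'In', values: ['g5'] },
        ],
      },
    },
  },
}, { provider: k8sProvider }); // k8sProvider: a kubernetes.Provider pointing at the cluster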
Thanks Florian!
@quick-house-41860 I tried adding a custom NodePool using a CustomResource:
const cluster = new eks.Cluster(
      `${regionalNamespace}-cluster`,
      {
        name: `${regionalNamespace}`,
        version: K8S_VERSION,
        vpcId: ...,
        privateSubnetIds: ...,
        publicSubnetIds: ...,
        enabledClusterLogTypes: ['api', 'audit', 'authenticator'],
        tags: projectTags,
        endpointPrivateAccess: true,
        endpointPublicAccess: true,
        nodeAssociatePublicIpAddress: false,
        gpu: true,
        autoMode: {
          enabled: true,
          createNodeRole: true,
          computeConfig: {
            nodePools: ['general-purpose'],
          },
        },
        authenticationMode: eks.AuthenticationMode.Api,
        accessEntries: ...,
        providerCredentialOpts: ...,
      },
      { provider: ... },
    );
const nodePool = new kubernetes.apiextensions.CustomResource(
      `${regionalNamespace}-nodepool`,
      {
        apiVersion: 'karpenter.sh/v1',
        kind: 'NodePool',
        metadata: {
          name: `${regionalNamespace}-gpu-nodepool`,
          namespace: 'kube-system',
          clusterName: cluster.eksCluster.name,
        },
        spec: {
          template: {
            spec: {
              taints: [
                {
                  key: 'nvidia.com/gpu',
                  effect: 'NoSchedule',
                },
              ],
              nodeClassRef: {
                name: 'default',
                group: 'eks.amazonaws.com',
                kind: 'NodeClass',
              },
              requirements: [
                {
                  key: 'karpenter.sh/capacity-type',
                  operator: 'In',
                  values: ['on-demand'],
                },
                {
                  key: 'eks.amazonaws.com/instance-family',
                  operator: 'In',
                  values: ['g5'],
                },
                {
                  key: 'nvidia.com/gpu.count',
                  operator: 'In',
                  values: ['1'],
                },
              ],
            },
          },
          limits: {
            gpu: 4,
          },
        },
      },
      {
        provider: ...,
      },
    );
But it gives me an error (cluster API not reachable) while creating the CR on the first deployment. And when I re-run the deployment after manually logging in to the cluster, it fails with a different error:
.metadata.clusterName: field not declared in schema
There's no metadata.clusterName property in k8s. Did you want to add this as a label instead?
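For example, something along these lines would pass schema validation (a sketch; the cluster-name label key is an assumption, the spec is elided because it stays the same as above, and NodePool is cluster-scoped so the namespace is dropped):

const nodePool = new kubernetes.apiextensions.CustomResource(
  `${regionalNamespace}-gpu-nodepool`,
  {
    apiVersion: 'karpenter.sh/v1',
    kind: 'NodePool',
    metadata: {
      name: `${regionalNamespace}-gpu-nodepool`,
      // record the cluster as a label instead of the unsupported metadata.clusterName
      labels: { 'cluster-name': regionalNamespace }, // assumed label key
    },
    spec: {
      // ... same template/requirements/limits as in the snippet above
    },
  },
  { provider: k8sProvider }, // assumed kubernetes provider targeting the cluster
);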
But then how is the NodePool CR mapped to the cluster?
Also, why did I get an API-not-reachable error the first time I deployed?
You deploy the CR to the cluster. It's the provider you define in the resource options. If you don't define any, it uses your current k8s context - that's why you most likely got an error (because you weren't logged in?).
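One way to make that mapping explicit (a sketch, assuming a pulumi/eks version that exposes kubeconfigJson on the cluster) is to build a kubernetes provider from the cluster's kubeconfig and pass it in the CR's resource options:

import * as kubernetes from '@pulumi/kubernetes';

// Build a kubernetes provider from the EKS cluster's kubeconfig so the NodePool
// CR is applied to that cluster rather than whatever the local kubectl context is.
const k8sProvider = new kubernetes.Provider(`${regionalNamespace}-k8s`, {
  kubeconfig: cluster.kubeconfigJson, // assumed property name on the eks.Cluster
});

// ...and reference it from the CustomResource:
// new kubernetes.apiextensions.CustomResource('...', { ... }, { provider: k8sProvider });

With an explicit provider, the deployment no longer depends on your local kubeconfig or login state, which should also avoid the first-run failure.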
Ahh that could be it