# aws
b
Hi, I’m using EKS and want to migrate from the default node group to a managed node group. How do I link the new nodes to the same security group as the old nodes? This is my previous code, which creates the cluster and nodes:
```typescript
import * as aws from "@pulumi/aws";
import * as eks from "@pulumi/eks";

// addPrefix, stampVpc, config, etc. are defined elsewhere in the program.
// Create an EKS cluster with the default configuration.
export const cluster = new eks.Cluster(addPrefix("cluster"), {
  vpcId: stampVpc.id,
  privateSubnetIds: stampVpc.privateSubnetIds,
  publicSubnetIds: stampVpc.publicSubnetIds,
  nodeAssociatePublicIpAddress: false,
  encryptRootBlockDevice: true,
  version: config.require("eks.version"),

  desiredCapacity: config.requireNumber("eks.desiredCapacity"),
  minSize: config.requireNumber("eks.minSize"),
  maxSize: config.requireNumber("eks.maxSize"),
  instanceType: config.require<aws.ec2.InstanceType>("eks.instanceType"),
  nodeAmiId: config.get("eks.ami") ?? latestAmiId,

  enabledClusterLogTypes: ["api", "audit", "authenticator", "controllerManager", "scheduler"],
  endpointPublicAccess: true, // TODO: Change this...
  endpointPrivateAccess: true,
  createOidcProvider: true,

  roleMappings: [
    {
      groups: ["system:masters"],
      roleArn: deployerAdminRole.arn,
      username: "argocd-deployer"
    }
  ],

  publicAccessCidrs: CNC_IPS,
  encryptionConfigKeyArn: clusterEncryptionKey.arn,
  providerCredentialOpts: {
    profileName: AWS_PROFILE,
    roleArn: AWS_ROLE_ARN
  }
});
```
And this is the new code:
```typescript
const cluster = new eks.Cluster(addPrefix("cluster"), {
  skipDefaultNodeGroup: true,
  vpcId: stampVpc.id,
  privateSubnetIds: stampVpc.privateSubnetIds,
  publicSubnetIds: stampVpc.publicSubnetIds,
  nodeAssociatePublicIpAddress: false,
  encryptRootBlockDevice: true,
  instanceRole: instanceRole,
  version: config.require("eks.version"),

  enabledClusterLogTypes: ["api", "audit", "authenticator", "controllerManager", "scheduler"],
  endpointPublicAccess: true, // TODO: Change this...
  endpointPrivateAccess: true,
  createOidcProvider: true,

  roleMappings: [
    {
      groups: ["system:masters"],
      roleArn: deployerAdminRole.arn,
      username: "argocd-deployer"
    }
  ],

  publicAccessCidrs: CNC_IPS,
  encryptionConfigKeyArn: clusterEncryptionKey.arn,
  providerCredentialOpts: {
    profileName: AWS_PROFILE,
    roleArn: AWS_ROLE_ARN
  },
});

// Create a simple AWS managed node group using a cluster as input.
const managedNodeGroup = eks.createManagedNodeGroup("my-cluster-ng", {
  cluster: cluster,
  nodeGroupName: "aws-managed-ng1",
  nodeRole: instanceRole,
  amiType: "AL2_x86_64",
  instanceTypes: [config.require<aws.ec2.InstanceType>("eks.instanceType")],
  // releaseVersion: config.get("eks.ami") ?? latestAmiId,
  // labels: { "ondemand": "true" },
  scalingConfig: {
    minSize: config.requireNumber("eks.minSize"),
    maxSize: config.requireNumber("eks.maxSize"),
    desiredSize: config.requireNumber("eks.desiredCapacity")
  },
}, cluster);
```
However, the security group of the nodes changes, and other resources that reference `cluster.nodeSecurityGroup.id` break. How do I attach the same security group to the new nodes? Thanks
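(For context, a hypothetical example of the kind of dependent resource that breaks; the `databaseSecurityGroup` and the rule details are illustrative, not from the original setup:)

```typescript
// Hypothetical consumer of the node security group: an ingress rule that lets
// the cluster's nodes reach a database. If the managed node group ends up on a
// different security group, rules like this no longer cover the new nodes.
const databaseIngress = new aws.ec2.SecurityGroupRule(addPrefix("db-from-nodes"), {
  type: "ingress",
  fromPort: 5432,
  toPort: 5432,
  protocol: "tcp",
  securityGroupId: databaseSecurityGroup.id, // hypothetical database SG
  sourceSecurityGroupId: cluster.nodeSecurityGroup.id,
});
```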
b
> However, the security group of the nodes changes, and other resources that reference `cluster.nodeSecurityGroup.id` break.

Can you elaborate here? It's not clear what the problem is, I'm afraid.
b
How do I associate the managed node group with a specific `nodeSecurityGroup`? I want to associate it with the `nodeSecurityGroup` created by the cluster.
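(A sketch of one possible approach, untested: AWS managed node groups only accept custom security groups through a launch template, so you could create a minimal launch template that carries `cluster.nodeSecurityGroup` and pass it to `createManagedNodeGroup`. The `nodeLaunchTemplate` name is illustrative, and exact behavior depends on your `@pulumi/eks` version.)

```typescript
// Sketch: reuse the cluster's node security group on the managed node group
// via a launch template. clusterSecurityGroup is included as well because,
// once you specify custom SGs in a launch template, EKS no longer attaches
// its own SG, and node <-> control-plane traffic would otherwise break.
const nodeLaunchTemplate = new aws.ec2.LaunchTemplate(addPrefix("managed-ng-lt"), {
  vpcSecurityGroupIds: [
    cluster.nodeSecurityGroup.id,
    cluster.clusterSecurityGroup.id,
  ],
});

const managedNodeGroup = eks.createManagedNodeGroup("my-cluster-ng", {
  cluster: cluster,
  nodeGroupName: "aws-managed-ng1",
  nodeRole: instanceRole,
  amiType: "AL2_x86_64",
  instanceTypes: [config.require<aws.ec2.InstanceType>("eks.instanceType")],
  // Point the node group at the launch template that holds the security groups.
  launchTemplate: {
    id: nodeLaunchTemplate.id,
    version: nodeLaunchTemplate.latestVersion.apply(v => `${v}`),
  },
  scalingConfig: {
    minSize: config.requireNumber("eks.minSize"),
    maxSize: config.requireNumber("eks.maxSize"),
    desiredSize: config.requireNumber("eks.desiredCapacity"),
  },
}, cluster);
```

With this, existing resources that reference `cluster.nodeSecurityGroup.id` should keep working as they did with the default node group, at the cost of managing one extra launch template.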