# aws
What does your code for provisioning the cluster look like? It looks to me like one of the outputs that you're trying to `map` over is undefined.
```python
import pulumi_aws as aws
import pulumi_eks as eks
from pulumi import ResourceOptions

prod_eks_cluster = eks.Cluster(f"{tenant_name}-prod-eks-cluster",
    # Map the cluster-admin role into system:masters for kubectl access.
    role_mappings=[
        eks.RoleMappingArgs(
            groups=["system:masters"],
            role_arn=cluster_admin_role.arn,
            username="pulumi:admin-usr",
        ),
    ],
    vpc_id=prod_cluster_vpc_id,
    public_subnet_ids=[prod_pub_subnet_1_id, prod_pub_subnet_2_id],
    private_subnet_ids=[prod_prv_subnet_1_id, prod_prv_subnet_2_id],
    node_subnet_ids=[prod_prv_subnet_1_id, prod_prv_subnet_2_id],
    node_associate_public_ip_address=True,
    endpoint_private_access=True,
    endpoint_public_access=True,
    skip_default_node_group=True,
    name=f"{tenant_name}-prod-dev-cluster",
    # version="1.22",
    fargate=False,
    instance_roles=[infra_node_group_role, application_node_group_role],
    provider_credential_opts=eks.KubeconfigOptionsArgs(
        role_arn=output_role_arn,
    ),
    storage_classes={"gp2": eks.StorageClassArgs(
        type="gp2", allow_volume_expansion=True, default=True, encrypted=True,
    )},
    enabled_cluster_log_types=["api", "audit", "authenticator"],
    opts=ResourceOptions(depends_on=[cluster_admin_role]),
)

prod_infra_node_group = eks.ManagedNodeGroup(f"{tenant_name}-prod-dev-infra",
    cluster=prod_eks_cluster.core,
    node_group_name=f"{tenant_name}-prod-dev-infra",
    subnet_ids=[prod_prv_subnet_1_id, prod_prv_subnet_2_id],
    node_role_arn=infra_node_group_role.arn,
    instance_types=["t3.medium"],
    scaling_config=aws.eks.NodeGroupScalingConfigArgs(
        desired_size=4,
        min_size=1,
        max_size=6,
    ),
    taints=[aws.eks.NodeGroupTaintArgs(effect="NO_SCHEDULE", key="dedicated", value="infra-group")],
    opts=ResourceOptions(parent=prod_eks_cluster),
)
```
Hmm, nothing obvious is jumping out at me. I would check that all the outputs you're passing in resolve to what you expect. A handy function for that is:
```python
import json

import pulumi


def pdebug(output):
    """
    Print debugging for Pulumi outputs.

    The best way to use this function is to add it to an apply chain. So given an
    output like this:

        output = namespace.metadata.apply(lambda metadata: metadata.name)

    you can use pdebug to debug it at various points like this:

        output = (
            namespace.metadata
            .apply(pdebug)
            .apply(lambda metadata: metadata.name)
            .apply(pdebug)
        )

    This will print the metadata and the result after the name has been extracted.
    """
    pulumi.log.info(
        json.dumps(
            output,
            indent=4,
        )
    )
    return output
```
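For example, to sanity-check a few of the inputs from your snippet (just a sketch, assuming those names are Pulumi `Output`s rather than plain strings):
```python
# Tack pdebug onto the outputs feeding the cluster; each value gets
# logged and passed through unchanged.
prod_cluster_vpc_id.apply(pdebug)
prod_prv_subnet_1_id.apply(pdebug)
prod_prv_subnet_2_id.apply(pdebug)
```
If any of those logs as null, that's your undefined value.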
The problem is specifically with the managed node group. I wonder if making `prod_eks_cluster` the parent could be causing some weird issues as well.
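One quick experiment to rule that out (a sketch, reusing the arguments from your snippet):
```python
# Sketch: the same managed node group, but without parent=prod_eks_cluster,
# to see whether the error changes when the parent relationship is removed.
prod_infra_node_group = eks.ManagedNodeGroup(f"{tenant_name}-prod-dev-infra",
    cluster=prod_eks_cluster.core,
    node_group_name=f"{tenant_name}-prod-dev-infra",
    subnet_ids=[prod_prv_subnet_1_id, prod_prv_subnet_2_id],
    node_role_arn=infra_node_group_role.arn,
    instance_types=["t3.medium"],
    scaling_config=aws.eks.NodeGroupScalingConfigArgs(
        desired_size=4,
        min_size=1,
        max_size=6,
    ),
    taints=[aws.eks.NodeGroupTaintArgs(effect="NO_SCHEDULE", key="dedicated", value="infra-group")],
    # no opts=ResourceOptions(parent=...) here
)
```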
If you look at the relevant source code, https://github.com/pulumi/pulumi-eks/blob/master/nodejs/eks/nodegroup.ts, the only things they call `map` on are `extraNodeSecurityGroups` and `roles`, which I think corresponds to `instance_roles`.
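So it might be worth confirming that both entries in `instance_roles` actually resolve, e.g. with the `pdebug` helper above (a sketch, reusing the names from your snippet):
```python
# Sketch: check that neither role is None and that their ARNs resolve.
# A None in instance_roles would surface as an undefined value when the
# provider maps over the roles.
assert infra_node_group_role is not None
assert application_node_group_role is not None
infra_node_group_role.arn.apply(pdebug)
application_node_group_role.arn.apply(pdebug)
```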