# aws
p
Hi All, I’m attempting to create an EKS cluster with managed node groups but I’m receiving the below error. Has anyone seen this before or know how I can get around it? I’d appreciate any help I can get on this.
Diagnostics:
 pulumi:pulumi:Stack (test-dev):
  error: Program failed with an unhandled exception:
  Traceback (most recent call last):
   File "/Users/ernest/portx/pulumi-portx-hosed-infrastructure-management/vnext_test/venv/lib/python3.9/site-packages/pulumi/runtime/resource.py", line 916, in do_rpc_call
    return monitor.RegisterResource(req)
   File "/Users/ernest/portx/pulumi-portx-hosed-infrastructure-management/vnext_test/venv/lib/python3.9/site-packages/grpc/_channel.py", line 1030, in __call__
    return _end_unary_response_blocking(state, call, False, None)
   File "/Users/ernest/portx/pulumi-portx-hosed-infrastructure-management/vnext_test/venv/lib/python3.9/site-packages/grpc/_channel.py", line 910, in _end_unary_response_blocking
    raise _InactiveRpcError(state) # pytype: disable=not-instantiable
  grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
  	status = StatusCode.UNKNOWN
  	details = "Cannot read properties of undefined (reading 'map')"
  	debug_error_string = "UNKNOWN:Error received from peer {created_time:"2023-07-21T13:33:24.437348-04:00", grpc_status:2, grpc_message:"Cannot read properties of undefined (reading \'map\')"}"
  >
   
  During handling of the above exception, another exception occurred:
   
  Traceback (most recent call last):
   File "/Users/ernest/.pulumi/bin/pulumi-language-python-exec", line 197, in <module>
    loop.run_until_complete(coro)
   File "/usr/local/Cellar/python@3.9/3.9.10/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete
    return future.result()
   File "/Users/ernest/portx/pulumi-portx-hosed-infrastructure-management/vnext_test/venv/lib/python3.9/site-packages/pulumi/runtime/stack.py", line 136, in run_in_stack
    await run_pulumi_func(lambda: Stack(func))
   File "/Users/ernest/portx/pulumi-portx-hosed-infrastructure-management/vnext_test/venv/lib/python3.9/site-packages/pulumi/runtime/stack.py", line 51, in run_pulumi_func
    await wait_for_rpcs()
   File "/Users/ernest/portx/pulumi-portx-hosed-infrastructure-management/vnext_test/venv/lib/python3.9/site-packages/pulumi/runtime/stack.py", line 120, in wait_for_rpcs
    raise exception
   File "/Users/ernest/portx/pulumi-portx-hosed-infrastructure-management/vnext_test/venv/lib/python3.9/site-packages/pulumi/runtime/rpc_manager.py", line 71, in rpc_wrapper
    result = await rpc
   File "/Users/ernest/portx/pulumi-portx-hosed-infrastructure-management/vnext_test/venv/lib/python3.9/site-packages/pulumi/output.py", line 103, in is_value_known
    return await is_known and not contains_unknowns(await future)
   File "/Users/ernest/portx/pulumi-portx-hosed-infrastructure-management/vnext_test/venv/lib/python3.9/site-packages/pulumi/output.py", line 103, in is_value_known
    return await is_known and not contains_unknowns(await future)
   File "/Users/ernest/portx/pulumi-portx-hosed-infrastructure-management/vnext_test/venv/lib/python3.9/site-packages/pulumi/output.py", line 103, in is_value_known
    return await is_known and not contains_unknowns(await future)
   [Previous line repeated 19 more times]
   File "/Users/ernest/portx/pulumi-portx-hosed-infrastructure-management/vnext_test/venv/lib/python3.9/site-packages/pulumi/runtime/resource.py", line 921, in do_register
    resp = await asyncio.get_event_loop().run_in_executor(None, do_rpc_call)
   File "/usr/local/Cellar/python@3.9/3.9.10/Frameworks/Python.framework/Versions/3.9/lib/python3.9/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
   File "/Users/ernest/portx/pulumi-portx-hosed-infrastructure-management/vnext_test/venv/lib/python3.9/site-packages/pulumi/runtime/resource.py", line 918, in do_rpc_call
    handle_grpc_error(exn)
   File "/Users/ernest/portx/pulumi-portx-hosed-infrastructure-management/vnext_test/venv/lib/python3.9/site-packages/pulumi/runtime/settings.py", line 273, in handle_grpc_error
    raise grpc_error_to_exception(exn)
  Exception: Cannot read properties of undefined (reading 'map')
  error: TypeError: Cannot read properties of undefined (reading 'map')
    at /snapshot/eks/bin/nodegroup.js:894:32
    at /snapshot/eks/node_modules/@pulumi/pulumi/output.js:257:35
    at Generator.next (<anonymous>)
    at /snapshot/eks/node_modules/@pulumi/pulumi/output.js:21:71
    at new Promise (<anonymous>)
    at __awaiter (/snapshot/eks/node_modules/@pulumi/pulumi/output.js:17:12)
    at applyHelperAsync (/snapshot/eks/node_modules/@pulumi/pulumi/output.js:236:12)
    at /snapshot/eks/node_modules/@pulumi/pulumi/output.js:190:65
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
  error: TypeError: Cannot read properties of undefined (reading 'map')
    at /snapshot/eks/bin/nodegroup.js:894:32
    at /snapshot/eks/node_modules/@pulumi/pulumi/output.js:257:35
    at Generator.next (<anonymous>)
    at /snapshot/eks/node_modules/@pulumi/pulumi/output.js:21:71
    at new Promise (<anonymous>)
    at __awaiter (/snapshot/eks/node_modules/@pulumi/pulumi/output.js:17:12)
    at applyHelperAsync (/snapshot/eks/node_modules/@pulumi/pulumi/output.js:236:12)
    at /snapshot/eks/node_modules/@pulumi/pulumi/output.js:190:65
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
s
What does your code for provisioning the cluster look like?
It looks to me like one of the outputs that you're trying to map over is undefined.
p
prod_eks_cluster = eks.Cluster(f"{tenant_name}-prod-eks-cluster",
    role_mappings=[
        eks.RoleMappingArgs(
            groups=["system:masters"],
            role_arn=cluster_admin_role.arn,
            username="pulumi:admin-usr",
        )],
    vpc_id=prod_cluster_vpc_id,
    public_subnet_ids=[prod_pub_subnet_1_id, prod_pub_subnet_2_id],
    private_subnet_ids=[prod_prv_subnet_1_id, prod_prv_subnet_2_id],
    node_subnet_ids=[prod_prv_subnet_1_id, prod_prv_subnet_2_id],
    node_associate_public_ip_address=True,
    endpoint_private_access=True,
    endpoint_public_access=True,
    skip_default_node_group=True,
    name=f"{tenant_name}-prod-dev-cluster",
    #version="1.22",
    fargate=False,
    instance_roles=[infra_node_group_role, application_node_group_role],
    provider_credential_opts=eks.KubeconfigOptionsArgs(
        role_arn=output_role_arn,
    ),
    storage_classes={"gp2": eks.StorageClassArgs(
        type='gp2', allow_volume_expansion=True, default=True, encrypted=True,)},
    enabled_cluster_log_types=["api", "audit", "authenticator"],
    opts=ResourceOptions(depends_on=[cluster_admin_role]),
)

prod_infra_node_group = eks.ManagedNodeGroup(f"{tenant_name}-prod-dev-infra",
    cluster=prod_eks_cluster.core,
    node_group_name=f"{tenant_name}-prod-dev-infra",
    subnet_ids=[prod_prv_subnet_1_id, prod_prv_subnet_2_id],
    node_role_arn=infra_node_group_role.arn,
    instance_types=["t3.medium"],
    scaling_config=aws.eks.NodeGroupScalingConfigArgs(
        desired_size=4,
        min_size=1,
        max_size=6,
    ),
    taints=[aws.eks.NodeGroupTaintArgs(effect="NO_SCHEDULE", key="dedicated", value="infra-group")],
    opts=ResourceOptions(parent=prod_eks_cluster),
)
s
hmm nothing obvious jumping out at me. I would check that all the outputs you're passing in resolve to what you expect. A handy function for that is:
import json

import pulumi


def pdebug(output):
    """
    Print debugging for Pulumi outputs.

    The best way to use this function is to add it to an apply chain. So given an output
    like this:

        output = namespace.metadata.apply(lambda metadata: metadata.name)

    You can use pdebug to debug it at various points like this:

        output = (
            namespace.metadata
            .apply(pdebug)
            .apply(lambda metadata: metadata.name)
            .apply(pdebug)
        )

    This will print the metadata and the result after the name has been extracted.
    """
    pulumi.log.info(
        json.dumps(
            output,
            indent=4,
        )
    )
    return output
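For example (just a sketch, and the names below are the ones from your snippet, so adjust as needed), you could wrap the role ARNs and subnet IDs that feed the cluster and node group:

import pulumi

# Sketch only: the resource names come from the snippet you posted above,
# and pdebug is the helper just above. Anything that logs as null here is a
# candidate for the undefined value the provider is trying to map over.
pulumi.Output.from_input(infra_node_group_role.arn).apply(pdebug)
pulumi.Output.from_input(application_node_group_role.arn).apply(pdebug)
pulumi.Output.from_input(prod_prv_subnet_1_id).apply(pdebug)
pulumi.Output.from_input(prod_prv_subnet_2_id).apply(pdebug)

Whatever prints as null during pulumi up is a good place to start looking.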
the problem is specifically with the managed node group, I wonder if making prod_eks_cluster the parent could be causing some weird issues as well.
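If you want to rule that out quickly, something like this (same arguments as your node group, purely an experiment with the explicit parent swapped for a plain dependency) would tell you whether the parent relationship is the trigger:

# Experiment only: identical to the node group in the snippet above, but the
# node group no longer hangs off prod_eks_cluster in the resource tree.
prod_infra_node_group = eks.ManagedNodeGroup(f"{tenant_name}-prod-dev-infra",
    cluster=prod_eks_cluster.core,
    node_group_name=f"{tenant_name}-prod-dev-infra",
    subnet_ids=[prod_prv_subnet_1_id, prod_prv_subnet_2_id],
    node_role_arn=infra_node_group_role.arn,
    instance_types=["t3.medium"],
    scaling_config=aws.eks.NodeGroupScalingConfigArgs(desired_size=4, min_size=1, max_size=6),
    taints=[aws.eks.NodeGroupTaintArgs(effect="NO_SCHEDULE", key="dedicated", value="infra-group")],
    opts=ResourceOptions(depends_on=[prod_eks_cluster]),  # instead of parent=prod_eks_cluster
)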
If you look at the relevant source code https://github.com/pulumi/pulumi-eks/blob/master/nodejs/eks/nodegroup.ts, the only things they call map on are extraNodeSecurityGroups and roles, and I think roles corresponds to instance_roles.
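A quick way to sanity-check that (again just a sketch, reusing the names from your snippet) is to confirm that both roles you pass to instance_roles actually exist and that their ARNs resolve before the cluster is built:

import pulumi

# Sketch only: confirm the two roles fed into instance_roles are real
# resources (not None) and that their ARNs resolve. If either one logs as
# missing, that would line up with the undefined map() in nodegroup.ts.
for name, role in [
    ("infra_node_group_role", infra_node_group_role),
    ("application_node_group_role", application_node_group_role),
]:
    if role is None:
        pulumi.log.warn(f"{name} is None before it reaches instance_roles")
    else:
        role.arn.apply(lambda arn, name=name: pulumi.log.info(f"{name} arn: {arn}"))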