s
Anyone else facing promise leaks when upgrading to pulumi-eks v3? https://pulumi-community.slack.com/archives/CJ909TL6P/p1762336459113889
g
Yes, I was facing the same problem, but I managed to fix it. It is rather painful and delicate work; as I keep warning people, using opinionated public component resources comes with great pain. The EKS package would be way better if it were designed as smaller modules for building your own EKS cluster component, rather than one giant do-it-all component.
Make sure you read the how-to guide: https://www.pulumi.com/registry/packages/eks/how-to-guides/v3-migration/
I'd suggest postponing the upgrade of @pulumi/aws until the very end. Here are a few code bits I had to modify:
```typescript
const clusterPromise = resolveAccessEntries({ isProdEnvironment: lib.env.isProdEnvironment() }).then(
  (accessEntries) =>
    new lib.aws.eks.Cluster("services", {
      accessEntries,
      karpenterVersion: getVersion("karpenter"),
      clusterName,
      network,
      version: config.require("kubernetes-version"),
      nodepools: isLocalZone ? ["local-zone-on-demand"] : ["regular-on-demand", "amd64-spot"],
    })
);
const cluster = pulumi.output(clusterPromise);
```
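Downstream code then reads values off that Output with apply() instead of awaiting clusterPromise again somewhere else. A minimal sketch, assuming the wrapper exposes a kubeconfig output (hypothetical name):
```typescript
// Sketch only: `kubeconfig` is assumed to be an output of the wrapper component.
// Unwrapping through apply() keeps everything in Output land instead of
// awaiting the promise a second time.
export const kubeconfig = cluster.apply((c) => c.kubeconfig);
```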
And the transforms on the eks.Cluster:
```typescript
{
  parent,
  transforms: [
    (args) => {
      // This transform is a must because @pulumi/eks v3 has not caught up with aws.eks.Cluster
      if (args.type === "aws:eks/cluster:Cluster") {
        args.props["bootstrapSelfManagedAddons"] = false;
        return { props: args.props, opts: args.opts };
      }
      return undefined;
    },
    (args) => {
      // We do not use the aws-auth ConfigMap because we use accessEntries,
      // but it keeps causing drift during refresh
      if (
        args.type === "kubernetes:core/v1:ConfigMap" &&
        args.props &&
        args.props.metadata &&
        args.props.metadata.name === "aws-auth" &&
        args.props.metadata.namespace === "kube-system"
      ) {
        // Set ignoreChanges for this resource
        return {
          props: args.props,
          opts: pulumi.mergeOptions(args.opts, {
            ignoreChanges: ["data.mapRoles"],
          }),
        };
      }
      return undefined;
    },
  ],
}
```
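For context, that options object is the one passed as the third constructor argument to the eks.Cluster inside the wrapper, so the transforms apply to every child resource the component registers. A rough sketch, with the parent and cluster args elided:
```typescript
import * as eks from "@pulumi/eks";

// Rough sketch: the transforms ride on the component's resource options and
// therefore reach its children, such as the aws.eks.Cluster and the aws-auth
// ConfigMap targeted above.
new eks.Cluster("services", {}, {
  transforms: [
    (args) => {
      if (args.type === "aws:eks/cluster:Cluster") {
        return { props: { ...args.props, bootstrapSelfManagedAddons: false }, opts: args.opts };
      }
      return undefined;
    },
    // ...plus the aws-auth ConfigMap transform shown above...
  ],
});
```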
s
Thanks for the reply. I did read that guide. What we are seeing is that an EKS cluster with an eks.ManagedNodeGroup works fine until you also start creating Kubernetes resources in Pulumi that reference the cluster. The ManagedNodeGroup implementation creates pending promises when it accesses cluster.core, and these leak when we later create the Kubernetes resources.
g
Right, yeah, I remember hitting something like this while experimenting with ManagedNodeGroup. I think what you are doing is something similar to:
```typescript
const eksCluster = new eks.Cluster("cluster", {...});
const nodeGroup = new eks.ManagedNodeGroup("nodes", { cluster: eksCluster }, { dependsOn: [eksCluster] });
```
The problem here is that real resources (from the AWS provider) cannot depend on ComponentResources; there is an issue somewhere about this exact problem. What you need is to depend on outputs from the eks.Cluster instead:
```typescript
const eksCluster = new eks.Cluster("cluster", {...});
const nodeGroup = new eks.ManagedNodeGroup("nodes", { cluster: eksCluster.cluster }, { dependsOn: [eksCluster.cluster] });
```
This is one of the design flaws in the eks.Cluster component.
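And for the Kubernetes resources you create against the cluster, the same principle helps: hand them a provider built from a concrete output such as kubeconfig rather than referencing the component itself. A rough sketch, assuming the stock eks.Cluster kubeconfig output (resource names made up):
```typescript
import * as eks from "@pulumi/eks";
import * as k8s from "@pulumi/kubernetes";

const eksCluster = new eks.Cluster("cluster");

// The provider depends on the kubeconfig output, not on the component itself.
const provider = new k8s.Provider("eks-k8s", {
  kubeconfig: eksCluster.kubeconfig.apply(JSON.stringify),
});

// Kubernetes resources then reference the cluster only through this provider.
const ns = new k8s.core.v1.Namespace("apps", {}, { provider });
```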