
acoustic-window-73051

01/10/2022, 3:41 PM
So starting last Thursday or so, when I try to bring up my EKS stack I get (first pulumi up):
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
If I pulumi up again, I instead get:
error: an unhandled error occurred: Program exited with non-zero exit code: -1
Any ideas? I've since version-locked my cluster to 1.20 and rolled back to pulumi 3.19, to no avail.
OK, so it's 2 separate issues... the unhandled error is:
I0110 15:58:45.075148 6411 step_executor.go:327] StepExecutor worker(138): step same on urn:pulumi:ASC-PRD-adamw-189--aws-stack::jxe::custom:resource:VPC$aws:ec2/vpc:Vpc$custom:resource:EFS$aws:efs/fileSystem:FileSystem$aws:backup/vault:Vault$aws:backup/plan:Plan::ASC-PRD-adamw-189--aws-stack--BUPl
pulumi:pulumi:Stack jxe-ASC-PRD-adamw-189--aws-stack running...
error: an unhandled error occurred: Program exited with non-zero exit code: -1
Ideas on either of these issues are appreciated.

billowy-army-68599

01/10/2022, 4:41 PM
The first one is easy to fix with a transformation; for the second one, you may need to open an issue.
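For reference, registering a stack-wide transformation in Python looks roughly like this (untested sketch; this body just logs each resource the program registers, including the children the eks.Cluster component creates internally, so you can spot which one emits the deprecated CRD):

import pulumi

def inspect_resources(args: pulumi.ResourceTransformationArgs):
    """ runs for every resource the program registers """
    pulumi.log.info(f'registering {args.type_} "{args.name}"')
    # to actually change a resource, return a result instead of None:
    # return pulumi.ResourceTransformationResult(props=args.props, opts=args.opts)
    return None

pulumi.runtime.register_stack_transformation(inspect_resources)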

acoustic-window-73051

01/10/2022, 6:28 PM
I've never used transformations before, and I don't reference apiextensions.k8s.io/v1beta1 anywhere in my code, so I'm not sure which resources to apply a transform to =/

billowy-army-68599

01/10/2022, 6:38 PM
Is it a Helm chart?

acoustic-window-73051

01/10/2022, 6:38 PM
no, this is while launching the EKS cluster itself

billowy-army-68599

01/10/2022, 6:49 PM
ah yes, I'll file a bug for that

acoustic-window-73051

01/10/2022, 6:50 PM
Can you think of a workaround for me in the short term?

billowy-army-68599

01/10/2022, 6:50 PM
that shouldn't result in your cluster failing, just a warning
what's your code?

acoustic-window-73051

01/10/2022, 6:51 PM
I've got about 3k lines of code, lemme grab the cluster stuff:
import pulumi_eks as eks
import pulumi

class EKSCluster(pulumi.ComponentResource):
    """ builds EKS cluster """

    def __init__(self,
            config=None,
            stack_artifacts=None,
            ):
        super().__init__(
                'custom:resource:EKS',
                __name__,
                {},
                opts=pulumi.ResourceOptions(parent=stack_artifacts['network'].get('private_subnets')[0])
                )
        if config is None:
            self._config = {}
        else:
            self._config = config
        self._stack_artifacts = stack_artifacts

        self._artifacts = {}
        self._artifacts['cluster'] = self.__get_cluster()
        self._artifacts['node_sg'] = self._artifacts['cluster'].node_security_group
        #self._stack_artifacts['sec_grps'].add_orch_and_vpns_on_port(
        #        sec_grp=self._artifacts['node_sg'],
        #        port=22,
        #        sec_grp_name='eks_node_sg',
        #        parent=self._artifacts['cluster'],
        #        )

    def get(self, key=None):
        """ expose artifacts """
        if key is None:
            return self._artifacts
        return self._artifacts[key]

    def __get_cluster_name(self):
        return f'{self._config["stackname"]}--EKSC'

    def __get_cluster(self):
        cluster_conf = {}
        cluster_conf['name'] = self.__get_cluster_name()
        cluster_conf['eks_sg'] = self._stack_artifacts['sec_grps'].get_eks()

        private_subnet_objs = self._stack_artifacts['network'].get('private_subnets')
        cluster_conf['private_subnet_ids'] = []
        for private_subnet_obj in private_subnet_objs:
            cluster_conf['private_subnet_ids'].append(private_subnet_obj.id)

        cluster_conf['nodes_conf'] = self.__build_node_conf(name=cluster_conf['name'])
        cluster_conf['cluster_role'] = self._stack_artifacts['roles'].get_eks_cluster_role()
        cluster_conf['node_role'] = self._stack_artifacts['roles'].get_eks_node_role()

        cluster = self.__build_cluster(cluster_conf=cluster_conf)
        return cluster

    def __build_node_conf(self, name):
        with open('./user-data/node-user-data') as file:
            node_user_data = file.read()
        nodes_conf = eks.ClusterNodeGroupOptionsArgs(
                auto_scaling_group_tags={
                    "Name": name,
                    "JX_ENV": self._config["stackname"]
                    },
                desired_capacity=self._config['location']['EKS_Nodes_Desired'],
                max_size=self._config['location']['EKS_Nodes_Max'],
                min_size=self._config['location']['EKS_Nodes_Min'],
                encrypt_root_block_device=True,
                instance_type=self._config['location']['EKS_Nodes_Type'],
                node_associate_public_ip_address=False,
                node_public_key=self._config['location']['sshpubkey'],
                node_root_volume_size=self._config['location']['EKS_Nodes_HDD'],
                node_user_data=node_user_data,
                version='1.20', # FIXME Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
                )
        return nodes_conf

    def __build_cluster(self, cluster_conf):
        cluster = eks.Cluster(resource_name=cluster_conf['name'],
            name=cluster_conf['name'],
            cluster_security_group=cluster_conf['eks_sg'],
            create_oidc_provider=True,
            encryption_config_key_arn=self._config['location']['KMSKey'],
            endpoint_private_access=True,
            endpoint_public_access=False,
            instance_role=cluster_conf['node_role'],
            kubernetes_service_ip_address_range=self._config['location']['EKS_Nodes_CIDR'],
            node_group_options=cluster_conf['nodes_conf'],
            private_subnet_ids=cluster_conf['private_subnet_ids'],
            public_subnet_ids=self._stack_artifacts['network'].get('public_subnets'),
            service_role=cluster_conf['cluster_role'],
            tags={
                "Name": cluster_conf['name'],
                "JX_ENV": self._config["stackname"]
                },
            vpc_id=self._stack_artifacts['network'].get('vpc').id,
            opts=pulumi.ResourceOptions(parent=self),
            )
        return cluster
most of the specific config is read from conf files and loaded into dicts
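It gets wired up from the main program roughly like this (illustrative sketch, not my actual entry point; config and stack_artifacts are built elsewhere from those conf files):

import pulumi

eks_cluster = EKSCluster(config=config, stack_artifacts=stack_artifacts)
pulumi.export('kubeconfig', eks_cluster.get('cluster').kubeconfig)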

billowy-army-68599

01/10/2022, 6:55 PM
and this used to work?

acoustic-window-73051

01/10/2022, 6:56 PM
Yeah, for a few months now; it started giving me grief last Weds/Thurs.

billowy-army-68599

01/10/2022, 6:56 PM
Did you upgrade versions of pulumi-eks?

acoustic-window-73051

01/10/2022, 6:57 PM
I don't think so, not intentionally at least. Where's that located? requirements.txt?
pulumi-eks==0.31.0
You still around, Jaxx? Does this mean anything to you?
I0110 19:34:07.109512   26332 eventsink.go:59] Invoking function: tok=aws:ssm/getParameter:getParameter asynchronously
I0110 19:34:07.110324   26332 eventsink.go:59] , obj={"name":"/aws/service/eks/optimized-ami/1.20/amazon-linux-2/recommended/image_id"}
I0110 19:34:07.111176   26332 eventsink.go:59] RegisterResource RPC prepared: t=eks:index:Cluster, name=ASC-PRD-adamw-189--stack--EKSC
    pulumi:pulumi:Stack jxe-ASC-PRD-adamw-189--stack running error: an unhandled error occurred: Program exited with non-zero exit code: -1
I0110 19:34:17.311108   26332 eventsink.go:59] Reading SSM Parameter: {
I0110 19:34:17.311358   26332 eventsink.go:59]   Name: "/aws/service/eks/optimized-ami/1.20/amazon-linux-2/recommended/image_id",
I0110 19:34:17.311592   26332 eventsink.go:59]   WithDecryption: true
I0110 19:34:17.311767   26332 eventsink.go:59] }
aws ssm get-parameter --name /aws/service/eks/optimized-ami/1.20/amazon-linux-2/recommended/image_id --region us-east-1 --query "Parameter.Value" --output text
ami-03c45fa21d6d9e641
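So the parameter itself resolves fine from the CLI. That's the same aws:ssm/getParameter invoke shown in the log above; reproduced standalone in the program it would look something like this (sketch):

import pulumi
import pulumi_aws as aws

# the lookup pulumi-eks performs to pick the node AMI for k8s 1.20
ami = aws.ssm.get_parameter(
        name='/aws/service/eks/optimized-ami/1.20/amazon-linux-2/recommended/image_id')
pulumi.export('eks_node_ami', ami.value)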
OK @billowy-army-68599, just FYI: I've upgraded pulumi-eks to 0.36, upgraded the pulumi lib, upgraded the CLI to latest, modified my code to fit the changes in 0.36, version-locked k8s to 1.18, and it's working now.
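For anyone hitting the same thing, the requirements.txt change was (assuming the 0.36.0 point release; the exact pins for the pulumi lib and CLI upgrades aren't captured here):

pulumi-eks==0.36.0

...plus version='1.18' in place of '1.20' in the ClusterNodeGroupOptionsArgs above.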
🎉 1