
    alert-zebra-27114

    7 months ago
    Howdy all, Is there any consensus on how to update an existing Kubernetes ConfigMap from Pulumi? It is the usual CoreDNS configuration in an EKS-based cluster 🙂 I'm trying to do an import-then-update cycle via the automation API, but I'm not having much success with it.

    prehistoric-activity-61023

    7 months ago
    What kind of problem do you have?
    I’d say importing it and starting to manage it within Pulumi seems like the easiest way to do that.

    alert-zebra-27114

    7 months ago
    I want to avoid all manual steps... So...
    • I have a Python script that creates all the parts: VPC, subnets, security groups, VPN connections, peering, EKS, monitoring, etc.
    • I invoke Pulumi via the automation API; this allows me to read a number of values from a configuration file first (like regions, users, type of cluster (production/test/ci), etc.)
    • I can invoke Pulumi up several times if needed: to import the ConfigMap the first time, and then change the data part of the ConfigMap in the next round
    • I get all green lights from pulumi preview
    • but I get the error "inputs to import do not match the existing resource" from pulumi up.
    I will try to create a minimal example.
    I got it to work... The trick is to use the proper ignore_changes. This small AWS example illustrates my "solution":
    """An AWS Python Pulumi program"""
    
    import pulumi
    import pulumi_aws_quickstart_vpc as aws_vpc
    import pulumi_eks as eks
    import pulumi_kubernetes as k8s
    
    profile = pulumi.Config('aws').require('profile')
    region = pulumi.Config('aws').require('region')
    
    variant = pulumi.Config().require('variant')
    pulumi.log.info(f"{variant=}")
    
    # No default VPC
    vpc = aws_vpc.Vpc('vpc',
                      cidr_block='20.20.0.0/16',
                      availability_zone_config=[
                          aws_vpc.AvailabilityZoneArgs(
                              availability_zone=f"{region}a",
                              public_subnet_cidr="20.20.0.0/24",
                              private_subnet_a_cidr="20.20.1.0/24",
                          ), aws_vpc.AvailabilityZoneArgs(
                              availability_zone=f"{region}b",
                              private_subnet_a_cidr="20.20.3.0/24",
                          )]
                      )
    
    cluster = eks.Cluster("eks",
                          vpc_id=vpc.vpc_id,
                          public_subnet_ids=vpc.public_subnet_ids,
                          private_subnet_ids=vpc.private_subnet_ids,
                          provider_credential_opts=eks.KubeconfigOptionsArgs(profile_name=profile))
    
    kubernetes_provider = k8s.Provider("kubernetes_provider", kubeconfig=cluster.kubeconfig)
    
    rname = 'coredns-conf'
    # Just to see the difference
    wanted_coredns_conf = """# CHANGED
    .:53 {
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        errors
        health
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
    """
    
    if variant == 'just-import':
        k8s.core.v1.ConfigMap(
            rname,
            metadata={
                'namespace': 'kube-system',
                'name': 'coredns',
            },
            opts=pulumi.ResourceOptions(
                provider=kubernetes_provider,
                import_='kube-system/coredns',
                ignore_changes=['data', 'metadata.annotations', 'metadata.labels']
            )
        )
    
    elif variant == 'declare':
        k8s.core.v1.ConfigMap(
            rname,
            metadata={
                'namespace': 'kube-system',
                'name': 'coredns',
            },
            data={
                'Corefile': wanted_coredns_conf
            },
            opts=pulumi.ResourceOptions(
                provider=kubernetes_provider,
                ignore_changes=['metadata.annotations', 'metadata.labels'],
            )
        )
    I run this project twice via the automation API:
    • first with variant="just-import", which imports the wanted ConfigMap,
    • then with variant="declare", which changes the data of the map.
    For unknown reasons it does a delete-create operation instead of an in-place update... but at least it works for now.
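For reference, the two-pass run can be sketched with the automation API like this (the stack name "dev" and the ./eks-project working directory are placeholders; adjust them for your project):

```python
"""Run the Pulumi program above twice: first to import the ConfigMap,
then to take over its data. A sketch; names are placeholders."""

# The two passes, in order: import the ConfigMap first, then declare its data.
VARIANTS = ["just-import", "declare"]

def run_two_pass(work_dir: str = "./eks-project") -> None:
    # Imported lazily so this module can be loaded without Pulumi installed.
    from pulumi import automation as auto

    # Creates the stack on the first run, reuses it afterwards.
    stack = auto.create_or_select_stack(stack_name="dev", work_dir=work_dir)
    for variant in VARIANTS:
        stack.set_config("variant", auto.ConfigValue(value=variant))
        stack.up(on_output=print)
```

Calling run_two_pass() performs both up operations in sequence; the program picks the branch to run from the "variant" config value.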
    Forgot to mention... the trick is to use the proper ignore_changes=[...] in the ResourceOptions.