# kubernetes
b
is there any way to get pulumi to force a deployment update when I change a configmap that is mounted into a container as a volume?
m
Are both the configmap and the deployment/container part of the same stack?
h
mark the configmap as immutable and make sure the deployment has a dependency on it
it will be replaced instead of updated and the deployment should be updated
b
nice! that gets me a little closer. My deployment is getting replaced though. Is there any way to ensure it's updated rather than replaced so I get a rolling update?
b
my code looks something like this:
```python
        configmap1 = k8s.core.v1.ConfigMap(
            self.pulumi_name + "RootConfigMap",
            metadata={
                "name": self.service_name + "-configmap1",
                "namespace": "responsive"
            },
            data={
                "foo": "bar"
            },
            immutable=True,
            opts=pulumi.ResourceOptions(provider=provider, depends_on=resources)
        )

        secret1 = k8s.core.v1.Secret(
            self.pulumi_name + "Secret",
            metadata={
                "name": self.service_name + "-secret1",
                "namespace": "responsive"
            },
            string_data={"key": "some data"},  # string_data is a mapping of key -> value; "key" is illustrative
            immutable=True,
            opts=pulumi.ResourceOptions(provider=provider, depends_on=resources)
        )

        deployment = k8s.apps.v1.Deployment(
            self.pulumi_name + "Deployment",
            api_version="apps/v1",
            kind="Deployment",
            metadata=k8s.meta.v1.ObjectMetaArgs(
                name=self.service_name,
                labels={"app": self.service_name},
                namespace="responsive"
            ),
            spec=k8s.apps.v1.DeploymentSpecArgs(
                replicas=1,
                selector=k8s.meta.v1.LabelSelectorArgs(
                    match_labels={
                        "app": self.service_name
                    }
                ),
                template=k8s.core.v1.PodTemplateSpecArgs(
                    metadata=k8s.meta.v1.ObjectMetaArgs(
                        labels={
                            "app": self.service_name,
                        },
                    ),
                    spec=k8s.core.v1.PodSpecArgs(
                        containers=[k8s.core.v1.ContainerArgs(
                            name=self.service_name,
                            image=self.image,
                            image_pull_policy="IfNotPresent",
                            volume_mounts=[
                                k8s.core.v1.VolumeMountArgs(
                                    name="configmap1-volume",
                                    mount_path="/mnt/configmap1"
                                ),
                                k8s.core.v1.VolumeMountArgs(
                                    name="secret1-volume",
                                    mount_path="/mnt/configmap2"
                                ),
                            ],
                        )],
                        volumes=[
                            k8s.core.v1.VolumeArgs(
                                name="configmap1-volume",
                                config_map=k8s.core.v1.ConfigMapVolumeSourceArgs(
                                    name=configmap1.metadata.name,
                                )
                            ),
                            k8s.core.v1.VolumeArgs(
                                name="secret1-volume",
                                secret=k8s.core.v1.SecretVolumeSourceArgs(
                                    secret_name=secret1.metadata.name
                                )
                            ),
                        ]
                    )
                )
            ),
            opts=pulumi.ResourceOptions(provider=provider, depends_on=resources)
        )
```
I found this: https://archive.pulumi.com/t/12019387/is-there-a-good-way-to-trigger-a-rolling-restart-on-a-deploy
yeah this is what I was going to fall back to. Set an annotation on the deployment with the value set to the checksum of all the configmaps/secrets
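A rough sketch of that fallback, hashing the same dicts that are passed to the ConfigMap/Secret; the variable names and the `checksum/config` key are illustrative, not from the thread:
```python
import hashlib
import json

# Hash the same dicts that are passed to the ConfigMap's data= and the Secret's
# string_data=, and put the digest on the pod template so any content change
# rolls the pods.
configmap_data = {"foo": "bar"}        # same dict passed to configmap1's data=
secret_data = {"key": "some data"}     # same dict passed to secret1's string_data=

config_checksum = hashlib.sha256(
    json.dumps(
        {"configmap": configmap_data, "secret": secret_data}, sort_keys=True
    ).encode()
).hexdigest()

# In the Deployment, set this on spec.template.metadata (not the Deployment's own
# metadata), so a changed digest triggers a rolling update:
#     annotations={"checksum/config": config_checksum}
```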
h
you don’t need dependsOn since the dependency is already there, and that might be causing the replacement. if you omit metadata.name it will get a randomized name which is essentially the same idea as the random env var.
b
`depends_on` is set to some other resources that the deployment depends on - I need that to ensure it's deployed after them. It doesn't include the configmap/secret.
> if you omit metadata.name it will get a randomized name which is essentially the same idea as the random env var
got it. let me try this. It would be nice to use a predictable name for other reasons (e.g. I have some runbooks that reference the name)
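For reference, a minimal sketch of what h is suggesting, assuming `pulumi_kubernetes` imported as `k8s`; the resource name is illustrative, and the stack export at the end is just one option for the runbook concern, not something from the thread:
```python
import pulumi
import pulumi_kubernetes as k8s

# Omit metadata.name: Pulumi auto-names the ConfigMap (service-configmap1-ab12cd3 style),
# so changing its immutable data creates a replacement under a new name.
configmap1 = k8s.core.v1.ConfigMap(
    "configmap1",
    metadata={"namespace": "responsive"},
    data={"foo": "bar"},
    immutable=True,
)

# In the Deployment, keep referencing the generated name exactly as in the snippet above:
#     config_map=k8s.core.v1.ConfigMapVolumeSourceArgs(name=configmap1.metadata.name)
# A data change then only edits the volume's configMap.name inside the pod template,
# so the Deployment is updated in place and Kubernetes performs a rolling update.

# If runbooks need a stable handle on the current name, export it as a stack output:
pulumi.export("configmap1_name", configmap1.metadata.name)
```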