# kubernetes

brave-ambulance-98491

03/29/2020, 10:00 PM
I'm seeing my `Deployment` resources show up as "replace" when I update `ConfigMap` values. Ideally, these should just "update" in that case. Is this a known issue?

better-rainbow-14549

03/30/2020, 8:32 AM
i think this is intended behaviour (possibly related to autonaming?). a new resource is created and once it's finished deploying and successfully starts up the services get updated to point at it and the old one is deleted.

brave-ambulance-98491

03/30/2020, 3:57 PM
This breaks things like pod disruption budgets, specifically for `Deployment`, so I don't love it.

gorgeous-egg-16927

03/30/2020, 4:43 PM
Pulumi does this because Pods don’t pick up changed ConfigMap values by default; it only happens when the Pod restarts. This behavior catches a lot of users off guard, so we made replacement the default. That said, I’m interested to hear more details if this is causing problems for you. Can you file an issue on the pulumi-kubernetes repo?

brave-ambulance-98491

03/30/2020, 4:44 PM

gorgeous-egg-16927

03/30/2020, 4:57 PM
Actually, I realized that I was mistaken in my earlier response. We don’t replace the Deployment by default, just update it (same as a rolling update with kubectl). Confirmed this locally, so your original issue is something else going on.

brave-ambulance-98491

03/30/2020, 4:59 PM
Awesome, thank you for looking at this! I'll see if I can find a repro. It may only be an issue for items that have a static `ConfigMap` name, where the `Deployment` spec isn't changing at all. Also, did the preview show a `replace` or an `update` for the `Deployment` for you?

gorgeous-egg-16927

03/30/2020, 4:59 PM
It showed an `update`. It could very well have to do with static names.

brave-ambulance-98491

03/30/2020, 5:07 PM
Thanks! I'll file a bug on this if I get a repro case.
👍 1

orange-policeman-59119

05/20/2020, 1:53 AM
I'm seeing the same issue (v1beta1 Deployment is being "replaced" instead of "updated"), I think this is causing a disruption for a deployment I'm monitoring right now:
```
# many lines removed
~ spec      : {
    ~ template: {
        ~ spec    : {
            ~ containers                   : [
                ~ [0]: {
                        ~ env            : [
                        # many lines removed
                        ~ [14]: {
                                ~ name     : "ENV_VAR_A" => "SOME_NEW_ENV_VAR"
                                - value    : "env-var-a-value"
                                + valueFrom: {
                                    + secretKeyRef: {
                                        + key : "a-key"
                                        + name: "on-a-secret-resource"
                                    }
                                }
                            }
                        + [15]: {
                                + name : "ENV_VAR_A"
                                + value: "env-var-a-value"
                            }
```
Is the reason why because:
• A new env var was introduced into a higher position
• This shifted all the other env vars "down"
• And during that shift, the `secretKeyRef` was added/removed?
If so, I think that's a defect; we already add an annotation to our pod spec with the hash of the secret key data so that when the secret updates, our pods do too.
Like so:
```
~ template: {
                metadata: {
                    annotations: {
                        checksum/secrets: "[secret]"
                    }
```
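The annotation technique described above (hashing the Secret data into the pod template so that a data change forces a rollout) can be sketched in plain Python. This is an illustrative sketch, not Pulumi or Kubernetes API code; the helper names are made up:

```python
import hashlib
import json

def data_checksum(data: dict) -> str:
    """Deterministic sha256 over the Secret/ConfigMap data."""
    canonical = json.dumps(data, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def pod_template(image: str, secret_data: dict) -> dict:
    """Pod template whose annotations change whenever secret_data changes."""
    return {
        "metadata": {
            "annotations": {"checksum/secrets": data_checksum(secret_data)},
        },
        "spec": {"containers": [{"name": "app", "image": image}]},
    }

old = pod_template("app:v1", {"a-key": "v1"})
new = pod_template("app:v1", {"a-key": "v2"})
# Changing only the secret data changes the pod template, so Kubernetes
# performs a rolling update even though the container spec is identical.
assert old != new
```

The point is that the rollout is driven by the template hash changing, which is exactly why an in-place `update` on the Deployment (rather than a `replace`) should be sufficient here.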

brave-ambulance-98491

05/20/2020, 4:55 PM
@orange-policeman-59119 - Are you specifying a name when you create the `ConfigMap` (not having Pulumi generate a name for you)? My suspicion was that this was what triggered this bug.
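The static-name vs. autonaming distinction discussed in this thread can be sketched in plain Python. This is an illustrative model of the behavior, not Pulumi's actual implementation; `autoname` is a made-up stand-in for Pulumi's generated-suffix naming:

```python
import hashlib

def autoname(base: str, data: dict) -> str:
    # Stand-in for Pulumi-style autonaming: the base name plus a suffix
    # derived from the contents, so changed data yields a new resource name.
    suffix = hashlib.sha256(repr(sorted(data.items())).encode()).hexdigest()[:8]
    return f"{base}-{suffix}"

def deployment_env_ref(configmap_name: str) -> dict:
    # The Deployment's pod spec references the ConfigMap by name.
    return {"envFrom": [{"configMapRef": {"name": configmap_name}}]}

# With autonaming, new data => new ConfigMap name => the Deployment's pod
# spec changes, so a rolling update of the pods is triggered.
v1 = deployment_env_ref(autoname("app-config", {"k": "1"}))
v2 = deployment_env_ref(autoname("app-config", {"k": "2"}))
assert v1 != v2

# With a static name, the Deployment spec is byte-for-byte unchanged, so
# running pods keep the old values until something else restarts them.
s1 = deployment_env_ref("app-config")
s2 = deployment_env_ref("app-config")
assert s1 == s2
```

This models why a statically named `ConfigMap` could be the trigger for the bug: with a static name there is no spec diff on the `Deployment` at all, so any "replace" Pulumi shows must be coming from somewhere else.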