# kubernetes
b
I'm seeing my `Deployment` resources show up as "replace" when I update `ConfigMap` values. Ideally, these should just "update" in that case. Is this a known issue?
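For context, here's roughly the shape of the setup (a minimal sketch with placeholder names, not my real program):
```typescript
import * as k8s from "@pulumi/kubernetes";

// ConfigMap holding app settings. No metadata.name is given, so Pulumi
// auto-names it (appends a random suffix to "app-config").
const appConfig = new k8s.core.v1.ConfigMap("app-config", {
    data: { LOG_LEVEL: "info" },
});

// Deployment that reads the ConfigMap values as environment variables.
const appDeployment = new k8s.apps.v1.Deployment("app", {
    spec: {
        selector: { matchLabels: { app: "app" } },
        replicas: 2,
        template: {
            metadata: { labels: { app: "app" } },
            spec: {
                containers: [{
                    name: "app",
                    image: "nginx:1.25",
                    envFrom: [{ configMapRef: { name: appConfig.metadata.name } }],
                }],
            },
        },
    },
});
```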
b
I think this is intended behaviour (possibly related to autonaming?). A new resource is created, and once it's finished deploying and successfully starts up, the services get updated to point at it and the old one is deleted.
b
This breaks things like pod disruption budgets, specifically for `Deployment`, so I don't love it.
g
Pulumi does this because Pods don’t pick up changed ConfigMap values by default; it only happens when the Pod restarts. This behavior catches a lot of users off guard, so we made replacement the default. That said, I’m interested to hear more details if this is causing problems for you. Can you file an issue on the pulumi-kubernetes repo?
g
Actually, I realized that I was mistaken in my earlier response. We don’t replace the Deployment by default, just update it (same as a rolling update with kubectl). Confirmed this locally, so your original issue is something else going on.
b
Awesome, thank you for looking at this! I'll see if I can find a repro. It may only be an issue for items that have a static `ConfigMap` name, where the `Deployment` spec isn't changing at all. Also, did the preview show a `replace` or an `update` for the `Deployment` for you?
g
It showed an `update`. It could very well have to do with static names.
b
Thanks! I'll file a bug on this if I get a repro case.
👍 1
o
I'm seeing the same issue (a v1beta1 Deployment is being "replaced" instead of "updated"); I think this is causing a disruption for a deployment I'm monitoring right now:
```
# many lines removed
~ spec      : {
    ~ template: {
        ~ spec    : {
            ~ containers                   : [
                ~ [0]: {
                        ~ env            : [
                        # many lines removed
                        ~ [14]: {
                                ~ name     : "ENV_VAR_A" => "SOME_NEW_ENV_VAR"
                                - value    : "env-var-a-value"
                                + valueFrom: {
                                    + secretKeyRef: {
                                        + key : "a-key"
                                        + name: "on-a-secret-resource"
                                    }
                                }
                            }
                        + [15]: {
                                + name : "ENV_VAR_A"
                                + value: "env-var-a-value"
                            }
```
Is the reason why because:
• A new env var was introduced at a higher position
• This shifted all the other env vars "down"
• And during that shift, the secretKeyRef was added/removed?

If so, I think that's a defect; we already add an annotation to our pod spec with the hash of the secret key data so that when the secret updates, our pods do too (a rough sketch of how we compute it is below, after the snippet).
Like so:
```
~ template: {
                metadata: {
                    annotations: {
                        checksum/secrets: "[secret]"
                    }
```
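For reference, roughly how we generate that checksum annotation (a simplified sketch; the resource names and hashing details here are illustrative, not our exact code):
```typescript
import * as crypto from "crypto";
import * as k8s from "@pulumi/kubernetes";

// Illustrative secret data; in the real stack this comes from config/CI.
const secretData: Record<string, string> = { "a-key": "a-value" };

const appSecret = new k8s.core.v1.Secret("on-a-secret-resource", {
    stringData: secretData,
});

// Hash the secret data so the pod template changes whenever the data does,
// which makes the Deployment roll its Pods when the secret is updated.
const secretsChecksum = crypto
    .createHash("sha256")
    .update(JSON.stringify(secretData))
    .digest("hex");

const app = new k8s.apps.v1.Deployment("app", {
    spec: {
        selector: { matchLabels: { app: "app" } },
        template: {
            metadata: {
                labels: { app: "app" },
                annotations: { "checksum/secrets": secretsChecksum },
            },
            spec: {
                containers: [{
                    name: "app",
                    image: "nginx:1.25",
                    env: [{
                        name: "SOME_NEW_ENV_VAR",
                        valueFrom: {
                            secretKeyRef: {
                                name: appSecret.metadata.name,
                                key: "a-key",
                            },
                        },
                    }],
                }],
            },
        },
    },
});
```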
b
@orange-policeman-59119 - Are you specifying a name when you create the `ConfigMap` (not having Pulumi generate a name for you)? My suspicion was that this was what triggered this bug.
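To make sure we're talking about the same thing, here's roughly what I mean by the two cases (a sketch with placeholder names; the comments reflect my understanding of the behaviour, not the docs):
```typescript
import * as k8s from "@pulumi/kubernetes";

// Case 1: static name. The ConfigMap keeps the same Kubernetes name
// across updates to its data.
const staticConfig = new k8s.core.v1.ConfigMap("static-config", {
    metadata: { name: "app-config" },
    data: { LOG_LEVEL: "debug" },
});

// Case 2: no name given. Pulumi auto-names the ConfigMap, so a data change
// can surface as a new, differently named ConfigMap rather than an in-place
// update (my understanding of the autonaming behaviour, not authoritative).
const autoNamedConfig = new k8s.core.v1.ConfigMap("auto-config", {
    data: { LOG_LEVEL: "debug" },
});
```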