# general
g
@creamy-potato-29402 re pulumi’s replacement strategy when a k8s secret / configmap has changed: currently it seems pulumi always first deletes the old deployment completely and then creates a new one, which is very disruptive for deployments that need to be up all the time. Isn’t it possible to simply inject some hash into the deployment’s annotations to force a rolling update instead of deleting and recreating the entire thing?
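(For reference, a minimal sketch of the checksum-annotation pattern the question describes — the resource names and the `checksum/config` annotation key are illustrative, not anything Pulumi prescribes:)
```typescript
import * as crypto from 'crypto';
import * as k8s from '@pulumi/kubernetes';

// Hash the config data up front so the digest can be embedded in the pod
// template; any change to the data changes the annotation value.
const configData = { foo: 'bar1' };
const configHash = crypto
  .createHash('sha256')
  .update(JSON.stringify(configData))
  .digest('hex');

const configMap = new k8s.core.v1.ConfigMap('app-config', {
  data: configData
});

const labels = { app: 'app' };

new k8s.apps.v1.Deployment('app', {
  spec: {
    selector: { matchLabels: labels },
    template: {
      metadata: {
        labels,
        // Editing the pod template is an in-place change, so Kubernetes
        // performs a rolling update rather than recreating the Deployment.
        annotations: { 'checksum/config': configHash }
      },
      spec: {
        containers: [{ name: 'main', image: 'nginx' }],
        volumes: [
          {
            name: 'config',
            configMap: { name: configMap.metadata.apply(m => m.name) }
          }
        ]
      }
    }
  }
});
```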
c
@glamorous-printer-66548 that should definitely not be true.
what does the object definition look like?
g
note, my deployments have fixed names
c
Why would your deployments be deleted?
g
ok, good to know that this shouldn’t be the case
let me try to come up with a simple example for reproduction
c
Your deployment should almost never be deleted.
That is very odd.
There is only one field that should trigger a replacement of your deployments. That field is the spec selector.
the `ConfigMap` might get replaced, but that would trigger a rollout, not a replacement of the `Deployment`
g
that’s what I would hope for, but that’s not what I’m seeing
i can reproduce this reliably
c
What is happening that suggests that the `ConfigMap` is triggering the `Deployment` to be replaced?
g
just trying to make a simple example
c
@glamorous-printer-66548 I just tested it now, I don’t see this behavior in the config rollout example
I see the `ConfigMap` replaced, and then the `Deployment` is updated.
How sure are you on a scale of 1-10 that you’re not changing `.spec.selector`?
g
```typescript
import * as k8s from '@pulumi/kubernetes';
const someDataConfigMap = new k8s.core.v1.ConfigMap('some-config', {
  metadata: {
    name: 'some-config'
  },
  data: {
    // WHENEVER the value of foo is changed (e.g. change it to `bar2`), pulumi will first delete the deployment and then recreate it
    foo: 'bar1'
  }
});

const APP_NAME = 'foo';
const labels = {
  app: APP_NAME
};

const deployment = new k8s.apps.v1.Deployment(APP_NAME, {
  metadata: {
    name: APP_NAME
  },
  spec: {
    selector: {
      matchLabels: labels
    },
    template: {
      metadata: {
        labels
      },
      spec: {
        containers: [
          {
            name: 'main',
            image: 'nginx'
            // command: 'f'
          }
        ],
        volumes: [
          {
            name: 'some-data',
            configMap: {
              name: someDataConfigMap.metadata.apply(metadata => metadata.name)
            }
          }
        ]
      }
    }
  }
});
```
there you go
this should make it reproducible
```json
{
  "name": "repro-pulumi-bugs",
  "devDependencies": {
    "@types/node": "latest"
  },
  "dependencies": {
    "@pulumi/kubernetes": "0.17.4",
    "@pulumi/pulumi": "latest"
  }
}
```
i’m not changing the selector as you can see
so i’m sure 9.5 😄
are you able to reproduce it?
c
Looking now
This example does what you say
g
👍
c
@glamorous-printer-66548 I believe now that this is actually the engine’s fault, not the kube provider’s fault. 🙂
The engine is deciding to delete before replacement without consulting the kube provider at all.
I’m still figuring out the root cause, though
The bottom line is, unequivocally, we should 100% not ever be replacing a `Deployment` unless you change `.spec.selector`, so this is absolutely a bug and we will fix it ASAP
stay tuned as I figure out what the problem is.
g
👍
c
@glamorous-printer-66548 figured it out.
If a resource needs to be deleted before it is replaced, the engine believes all of its dependent resources need to be deleted before replacement as well.
determining if this is actually necessary…
@glamorous-printer-66548 the workaround for you is: don’t manually name the `ConfigMap`.
you should be good in that case.
@glamorous-printer-66548 if you can’t do that, then for now just pass the `ConfigMap` name as a plain string instead of referencing the resource, so there’s no formal dependency.
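(A minimal sketch of both workarounds applied to the repro above — resource names are taken from the repro, and the commented-out alternative shows the plain-string reference:)
```typescript
import * as k8s from '@pulumi/kubernetes';

// Workaround 1: drop metadata.name so Pulumi auto-names the ConfigMap.
// A data change then replaces the ConfigMap under a fresh auto-generated
// name, and the new name flows into the pod template as a rolling update.
const someDataConfigMap = new k8s.core.v1.ConfigMap('some-config', {
  data: { foo: 'bar1' }
});

const APP_NAME = 'foo';
const labels = { app: APP_NAME };

const deployment = new k8s.apps.v1.Deployment(APP_NAME, {
  metadata: { name: APP_NAME },
  spec: {
    selector: { matchLabels: labels },
    template: {
      metadata: { labels },
      spec: {
        containers: [{ name: 'main', image: 'nginx' }],
        volumes: [
          {
            name: 'some-data',
            configMap: {
              // The auto-generated name, taken from the ConfigMap's outputs.
              name: someDataConfigMap.metadata.apply(m => m.name)
              // Workaround 2 (if the ConfigMap must keep its fixed name):
              // reference it by a literal string instead, so there is no
              // formal dependency between the two resources:
              // name: 'some-config'
            }
          }
        ]
      }
    }
  }
});
```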
Filed an issue, I’ll get to the bottom of this when everyone is awake again.
I do not believe the engine is doing the right thing here, but I could be wrong.
@glamorous-printer-66548 we’re updating the issue with the medium-term plan now. Are you all set with the workaround?
g
what’s the plan and what’s medium term? 🙂
yeah i guess for now i’ll just not lock the name of the configmap
c
@glamorous-printer-66548 my guess is that we’ll solve this within a couple of weeks, but it’s not my decision. @microscopic-florist-22719 ?
m
Certainly within a couple weeks. Hopefully much sooner (< a week).
c
Lol, didn't want to over-promise