# kubernetes
b
So, I have a race condition present in one of my Pulumi programs, and I'm trying to figure out if there's a way to solve this problem. Basically, I have a `Deployment` that uses a `ConfigMap`, and also snags the `name` property of the `ConfigMap` for injecting as an environment variable. The abbreviated code is:
```typescript
import * as k8s from "@pulumi/kubernetes";

const myConfigMap = new k8s.core.v1.ConfigMap(...);
const deployment = new k8s.apps.v1.Deployment(
  "my-deployment",
  {
    spec: {
      template: {
        spec: {
          containers: [
            {
              name: "example",
              env: [
                {
                  name: "CONFIG_MAP_NAME",
                  value: myConfigMap.metadata.name,
                },
              ],
            },
          ],
        },
      },
    },
  },
);
```
My problem is that when I'm running an update, Pulumi does:

1. Create new `myConfigMap`.
2. Delete old `myConfigMap`.
3. Run update on `deployment`, including changes to point to the new `myConfigMap`.

This leaves a period of time between step 2 ending and step 3 ending where the injected name in my old version of `deployment` no longer points to a valid `ConfigMap` in our namespace. What I want to have Pulumi do is reverse steps 2 & 3:

1. Create new `myConfigMap`.
2. Run update on `deployment`, including changes to point to the new `myConfigMap`.
3. Delete old `myConfigMap`.

Is there a way to do this in Pulumi (have cleanup/deletion happen conditional on other resource updates completing)?
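For reference, the closest knob I've found is the `deleteBeforeReplace` resource option, and my (possibly wrong) understanding is that its default of `false` should already give the create → update → delete ordering I'm after. A minimal sketch of setting it explicitly, with hypothetical data keys:

```typescript
import * as k8s from "@pulumi/kubernetes";

// Sketch only: `deleteBeforeReplace: false` is the documented default,
// so this just makes the intended create-before-delete replacement
// ordering explicit rather than changing behavior.
const myConfigMap = new k8s.core.v1.ConfigMap(
  "my-config-map",
  {
    data: { LOG_LEVEL: "debug" }, // hypothetical keys
  },
  { deleteBeforeReplace: false },
);
```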
s
hmm is there a reason why you aren't using something like `envFrom` instead?
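to make the question concrete, a minimal sketch of what I mean by `envFrom` (the image and data keys are hypothetical):

```typescript
import * as k8s from "@pulumi/kubernetes";

const myConfigMap = new k8s.core.v1.ConfigMap("my-config-map", {
  data: { LOG_LEVEL: "debug" }, // hypothetical keys
});

// envFrom surfaces every key of the ConfigMap as an environment variable,
// so individual `env` entries aren't needed just to pass values through.
const container: k8s.types.input.core.v1.Container = {
  name: "example",
  image: "nginx", // hypothetical image
  envFrom: [{ configMapRef: { name: myConfigMap.metadata.name } }],
};
```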
b
So I am also using `envFrom` with that `ConfigMap`. 🙂 The use-case here is that this deployment needs to launch new pods that also mount in that `ConfigMap`, so I need to know the name to bind to.
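Roughly, the workers do something like this with the injected name (the image and volume/mount names here are hypothetical):

```typescript
// Sketch of what the workers inside the deployment do at runtime with the
// injected name. This is plain Kubernetes pod-spec shape, not Pulumi.
const configMapName = process.env.CONFIG_MAP_NAME!;

const launchedPodSpec = {
  containers: [
    {
      name: "worker", // hypothetical
      image: "my-worker:latest", // hypothetical
      volumeMounts: [{ name: "config", mountPath: "/etc/config" }],
    },
  ],
  // The launched pod binds to the ConfigMap by the name it was handed.
  volumes: [{ name: "config", configMap: { name: configMapName } }],
};
```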
s
but the code you posted above does not use `envFrom`
what's your configmap resource look like? i believe if you do NOT use the `metadata.name` property (so you let Pulumi autoname the resource for you), it will handle it the way you want, where it creates the new configmap and auto-replaces the pods before removing the old configmap
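concretely, my understanding of the difference, sketched (the data keys are hypothetical):

```typescript
import * as k8s from "@pulumi/kubernetes";

// Autonamed: Pulumi appends a random suffix (e.g. "my-config-a1b2c3").
// When `data` changes, the provider (as I understand it) replaces the
// ConfigMap: it creates the new one under a fresh name, updates
// dependents to point at it, and only then deletes the old one.
const autonamed = new k8s.core.v1.ConfigMap("my-config", {
  data: { LOG_LEVEL: "debug" }, // hypothetical keys
});

// Explicitly named: a replacement would collide with the live object, so
// the old ConfigMap has to go away before the new one can be created,
// which is exactly the gap you're seeing.
const explicitlyNamed = new k8s.core.v1.ConfigMap("my-config-pinned", {
  metadata: { name: "my-config" },
  data: { LOG_LEVEL: "debug" },
});
```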
b
Whether or not there's an `envFrom` shouldn't affect the execution, I don't think. This is letting Pulumi name the resource for me - it's generating `metadata.name` on-the-fly. It's also possible that this only happens when the create operation fails for the deployment, since I noticed this earlier today when an update failed partway through. Maybe it's not a generic race, but rather a poorly-handled error?
s
> Whether or not there's an `envFrom` shouldn't affect the execution, I don't think.

sorry, I should have been more clear that my question about this was tangential.

> This is letting Pulumi name the resource for me - it's generating `metadata.name` on-the-fly.

thanks for confirming! I wasn't sure since the code you posted just has `...` where the configmap specs are

> It's also possible that this only happens when the create operation fails for the deployment

huh? i thought your issue was that you saw Pulumi delete the old configmap before making any changes to the deployment.