# kubernetes
c
One more - I'm busy today 😉 Thank you for your effort Pulumi Pros 🙏 What change triggers a redeploy of a Kubernetes Configmap? #8952
b
Short answer - it will redeploy if any of the properties has changed (metadata or actual contents).
What are you seeing in terms of behavior?
c
I'll have to vet this then. I updated Pulumi.dev.yaml > myproject:appconfig.port, for example. This should have resulted in a new ConfigMap being created, not a value replaced in the current ConfigMap, right? The ConfigMap is immutable, so I'd expect any change of content to result in a replace action. I don't think it did.
q
Did you mark the config map as immutable? They're not immutable by default
b
I think Pulumi always treats ConfigMaps and Secrets as immutable, even if they’re not immutable from a Kubernetes perspective.
@chilly-plastic-75584 I’d expect the ConfigMap to get replaced if any of the properties changed. Are you sure the values are changing in the way you think they are? You could always print the YAML you’re serializing and compare.
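For instance (a minimal sketch, assuming the Go SDK and that your ConfigMap resource is in a variable like `cfgMap`) - export what you're serializing as stack outputs and compare between runs:
```go
// Export the ConfigMap's resolved name and data as stack outputs so you
// can diff them between `pulumi preview` / `pulumi up` runs.
ctx.Export("configMapName", cfgMap.Metadata.Name())
ctx.Export("configMapData", cfgMap.Data)
```
If the exported data isn't changing between deployments, the provider has nothing to replace.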
c
Sure thing, I'll investigate further. Good to know. Also one more related question: since the ConfigMap creates a file mount, and there is a container that then uses this ConfigMap... I'm assuming that the pod would need to restart to detect the new ConfigMap changes, right? Any articles on explicitly defining the dependency between them, so a change in one resource that's not normally triggered ensures the other resource is refreshed/reloaded? Cheers
b
If the ConfigMap is declared as a dependency of the Deployment (or whatever), it will update the Deployment definition, because it needs to update the ConfigMap reference. This will restart all the pods.
So yes.
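A minimal sketch of that wiring (Go SDK; the deployment and image names are illustrative, and `cfgMap` is assumed to be the ConfigMap from your program) - mounting the ConfigMap by its *output* name is what creates the dependency, so a replacement (new autoname) flows into the pod spec and rolls the pods:
```go
import (
	appsv1 "github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes/apps/v1"
	corev1 "github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes/core/v1"
	metav1 "github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes/meta/v1"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

// Inside pulumi.Run, after creating cfgMap:
labels := pulumi.StringMap{"app": pulumi.String("myapi")}
_, err := appsv1.NewDeployment(ctx, "myapi", &appsv1.DeploymentArgs{
	Spec: &appsv1.DeploymentSpecArgs{
		Selector: &metav1.LabelSelectorArgs{MatchLabels: labels},
		Template: &corev1.PodTemplateSpecArgs{
			Metadata: &metav1.ObjectMetaArgs{Labels: labels},
			Spec: &corev1.PodSpecArgs{
				Containers: corev1.ContainerArray{
					corev1.ContainerArgs{
						Name:  pulumi.String("myapi"),
						Image: pulumi.String("myapi:latest"),
						VolumeMounts: corev1.VolumeMountArray{
							corev1.VolumeMountArgs{
								Name:      pulumi.String("config"),
								MountPath: pulumi.String("/etc/config"),
							},
						},
					},
				},
				Volumes: corev1.VolumeArray{
					corev1.VolumeArgs{
						Name: pulumi.String("config"),
						ConfigMap: &corev1.ConfigMapVolumeSourceArgs{
							// Referencing the output name (not a literal string)
							// is what makes the Deployment track replacements.
							Name: cfgMap.Metadata.Name(),
						},
					},
				},
			},
		},
	},
})
if err != nil {
	return err
}
```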
q
I don't believe Pulumi does treat them as immutable, I've never seen that behaviour.
b
There’s some discussion in the Slack as well
To be clear, it’s treating them logically as immutable, not `immutable` in the new Kubernetes capability sense.
o
@chilly-plastic-75584 this is a quirk of the Pulumi provider & I think Itay linked to the right thread. The Kubernetes provider treats them as immutable, and will generate a Replace action, but if you let Pulumi autoname the resource, by not setting `metadata.name`, changes to that value are treated as an update. If you set `metadata.name`, we generate a Replace on consumers, which can cause the unexpected behavior of deleting & recreating a Deployment, StatefulSet, etc.
🙌 1
c
Updated discussion and posting in here too. If the configmap has a new value and I'm loading this into a container as a config file, I want the pod to be redeployed. So I'll check through this thread and see if that's possible by not setting name or such.
b
It should automatically happen if you don’t manually name the configmap - this is the default behavior.
👍 1
c
Yeah I'm doing this:
```go
cfgMap, err = corev1.NewConfigMap(ctx, configData.ServiceConfigMapName(), &corev1.ConfigMapArgs{
	Metadata: &metav1.ObjectMetaArgs{
		Labels: configData.AppPulumiStringMap(),
	},
	Data:      pulumi.StringMap{configData.Configdatamapname: pulumi.String(string(marshalledConfigVals))},
	Immutable: pulumi.Bool(true),
}, pulumi.Provider(prov))
```
I'm not setting `metadata.name` explicitly
So I still see the autonamed prefix even if the 2nd param is setting the configmap name
sorry suffix
Right now it's doing `myapi-project-adlkfsli` style. It just isn't making the ConfigMap be rebuilt when the marshalled string has a different value.
o
You should see that the suffix changes if you change the value in "data"
c
It doesn't. Could I be conflicting with `--refresh`?
b
What happens if you do `pulumi preview` - does it show the config map as needing an update?
c
Like it's ignoring?
• If I don't use `--refresh` and delete the ConfigMap, it won't redeploy it.
• If I do use refresh, it recognizes it's missing and redeploys
But if refresh is saying "stop tracking" for some reason then that's a conflict i guess
b
`refresh` seems orthogonal here.
o
Deleting the configmap?
c
Trying to test, so I deleted it manually. Refresh is required so it knows it's missing from the target environment (at least when I tried last time)
o
ah... you shouldn't be manually deleting it, that will indeed cause issues
b
If you deploy your stack (which creates the configmap) using `pulumi up`, then update your config, then run `pulumi up` again - what happens?
c
I love orthogons. My favorite shape
o
you should be able to just change the value of the data property on the configmap, & Pulumi will take care of creating a new configmap & deleting the old
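e.g. a sketch (hypothetical config key, using the SDK's `config` package) - sourcing Data from stack config means editing Pulumi.dev.yaml is enough to trigger the replace on the next `up`:
```go
// import config "github.com/pulumi/pulumi/sdk/v3/go/pulumi/config"

// Reading the value from stack config: changing it in Pulumi.dev.yaml
// changes Data, which replaces the autonamed ConfigMap on the next up.
cfg := config.New(ctx, "myproject")
port := cfg.Require("port") // hypothetical key (myproject:port)
cm, err := corev1.NewConfigMap(ctx, "app-config", &corev1.ConfigMapArgs{
	Data: pulumi.StringMap{
		"config.yaml": pulumi.Sprintf("port: %s", port),
	},
})
if err != nil {
	return err
}
ctx.Export("configMapName", cm.Metadata.Name())
```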
c
ok will try. Had been using `watch`, and `up` on occasion. Will try just `up`
o
if you delete a managed resource, how we handle that of course can vary, but I wouldn't be surprised if that code path doesn't cause us to generate a new `metadata.name` or something like that
c
So running `mage pulumi:diff myproject dev` (I love mage)... I get a new ConfigMap
I will try to run and see if it works now
Maybe watch doesn't work for this for some reason
That worked! Ok, so running `watch --refresh` didn't give the right workflow. I'm going to remove `--refresh` and see if it changes behavior
o
I honestly am not sure how `watch` and `--refresh` interact; I'd need to dig deeper. It doesn't refresh between every deploy, does it?
c
it refreshes, but not the ConfigMap + the associated resources. Using `up`:
```
Resources:
    ~ 2 updated
    +-1 replaced
    3 changes. 6 unchanged

Duration: 16s
```
that works!
o
awesome! glad that works
c
Boom. `watch` worked without refresh. Recreated the ConfigMap + redeployed. The refresh doesn't work like I thought it would. I'll summarize my result on the associated GitHub discussion. This refresh behavior definitely could use some more writing. It's far different than I expected, especially this case of ignoring changed fields. Makes sense for Kubernetes automated fields I guess, but totally unexpected behavior for me when expecting it to fix and align resources to my definition.
o
@chilly-plastic-75584 what does your "pulumi:diff" command do? Based on the comments so far, I wonder if you've been using `--refresh` where you actually want "preview"?
`pulumi refresh` and the refresh stage of `pulumi up --refresh` take resources in your stack, as they existed at the last deployment, and compare them to what's currently deployed. It's basically doing `kubectl get config ...` and refreshing the state file.
also diff there.
o
okay, yeah, I see you're doing a pulumi preview
c
I expected refresh to do what you said, and then to make sure anything different from my plan is updated to what it should be per the code.
b
`refresh` won’t run an `up` - it’ll just edit the local state
c
Right. I was combining `watch --refresh`
o
plan isn't a term I'm familiar with here, but I think the disconnect is that Pulumi manages a state file; you can inspect it via `pulumi stack export --file stack.json`
I think adding `--refresh` causes us to run a "refresh step" before other commands, e.g. `pulumi up --refresh` is equivalent to `pulumi refresh; pulumi up`
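If it helps to see those two phases concretely, here's a minimal sketch using the Pulumi Automation API (Go; the stack name and project directory are hypothetical):
```go
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/pulumi/pulumi/sdk/v3/go/auto"
)

func main() {
	ctx := context.Background()

	// Select an existing stack from a local project directory.
	stack, err := auto.SelectStackLocalSource(ctx, "dev", "/path/to/project")
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}

	// The refresh step: reconcile the state file with what's actually deployed.
	if _, err := stack.Refresh(ctx); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}

	// Then the regular up, diffing the program against the refreshed state.
	res, err := stack.Up(ctx)
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println(res.StdOut)
}
```
Same refresh-then-up sequencing the `--refresh` flag gives you on the CLI.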
c
I wanted it to detect that the resource was gone or different, and got things mixed up. K8s and Pulumi are complicated 🙂 Well, K8s is complicated by itself
o
I think `pulumi watch --refresh` is just `pulumi refresh; pulumi watch` (or if you're familiar with nodemon or watchexec, it's like `pulumi refresh; watchexec pulumi up`)
b
@orange-policeman-59119 I’m surprised that `refresh` would have any impact here, unless the resource was already updated on k8s.
o
I agree, I think you're right but I'm not an expert yet on it