pulumi suggests replacing k8s configmap even if only data section is updated
# general
e
pulumi suggests replacing a k8s configmap even if only the `data` section is updated. Is this a known issue or expected behavior? I can't find any issues about it on GitHub.
kubernetes:core:ConfigMap  argocd-cm                 replace     [diff: ~data]
c
this is the designed behavior.
updating a configmap in place will cause the data to be synchronized to the pods that reference it based on when the kubelet TTL expires.
by marking `data` as a field that requires replacement, pulumi will orchestrate an operation that creates the new configmap, updates all references to it, and then, when those updates succeed, deletes the old configmap.
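A minimal sketch of that orchestration, assuming TypeScript (resource names, keys, and image are illustrative): with an auto-named ConfigMap, the replacement gets a fresh name, and updating the deployment's reference triggers a deliberate rollout instead of waiting on the kubelet.

```typescript
import * as k8s from "@pulumi/kubernetes";

// No explicit metadata.name: Pulumi auto-names the ConfigMap with a random
// suffix, so the replacement can be created alongside the old object.
const config = new k8s.core.v1.ConfigMap("app-config", {
    data: { "app.properties": "log.level=info" },
});

// The deployment references the generated name. When `data` changes, Pulumi
// creates a new ConfigMap, updates this reference (rolling the pods), and
// only then deletes the old ConfigMap.
const app = new k8s.apps.v1.Deployment("app", {
    spec: {
        selector: { matchLabels: { app: "app" } },
        template: {
            metadata: { labels: { app: "app" } },
            spec: {
                containers: [{
                    name: "app",
                    image: "nginx:1.17",
                    volumeMounts: [{ name: "config", mountPath: "/etc/app" }],
                }],
                volumes: [{ name: "config", configMap: { name: config.metadata.name } }],
            },
        },
    },
});
```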
e
@creamy-potato-29402 thanks for the reply! The thing is that I explicitly define `metadata.name` for the configmap. That's why I expect pulumi to update it in place, as kubectl would. Pulumi does this (in-place update) when I use `ConfigFile`, so why does it behave differently in the case of `ConfigMap`? Also, it fails during the replace, because it tries to create the new configmap with the same name before deleting the old one. Actually, a replace would even work for me, but this replace failure breaks things completely.
IMO, k8s objects already have this end-state declarative nature; it makes no sense to delete one in order to update it.
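For illustration, a sketch of the situation being described, assuming TypeScript (the data key/value is hypothetical): with an explicit `metadata.name`, create-before-delete cannot succeed, because two ConfigMaps cannot share a name. One workaround is Pulumi's `deleteBeforeReplace` resource option, which forces the delete-then-create ordering.

```typescript
import * as k8s from "@pulumi/kubernetes";

// Fixed name: the API server rejects a second ConfigMap named "argocd-cm",
// so a create-before-delete replacement collides with the existing object.
const argocdCm = new k8s.core.v1.ConfigMap("argocd-cm", {
    metadata: { name: "argocd-cm" },
    data: { "some.key": "some-value" }, // hypothetical key/value
}, {
    // Force delete-then-create so the replacement never collides on name.
    deleteBeforeReplace: true,
});
```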
c
it does not do an in-place update for `ConfigFile`; it should definitely replace it, as it does in any other case. It should also never try to create an object with the same name before deleting the old one. Do you have more details on this?
e
yep, I can give you the exact resource definition
c
what version of the kubernetes package and the CLI are you using?
As for whether this is the right decision: I would say the stock k8s update strategy for configmaps, which is updating pods “randomly” after the kubelet TTL expires and syncs the state, is a massive footgun, and it confuses ~everyone who learns of it for the first time.
It also makes every update unstable. What happens if the deployment controller rolls back? If you’ve updated your configmap in place, you can’t roll the data back. You just get random failures.
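A sketch of that footgun, assuming TypeScript and a fixed-name ConfigMap mounted as a volume (names and image are illustrative): nothing in this pod spec changes when the configmap data is edited in place, so there is no rollout and nothing to roll back.

```typescript
import * as k8s from "@pulumi/kubernetes";

// The configmap is referenced by a fixed name, so an in-place data update
// leaves this spec untouched: no rollout happens. Each node's kubelet
// refreshes the mounted files on its own sync interval, so pods pick up
// the new data at unpredictable times, and the old data is gone for good.
const reader = new k8s.core.v1.Pod("reader", {
    spec: {
        containers: [{
            name: "reader",
            image: "busybox:1.31",
            command: ["sh", "-c", "sleep 3600"],
            volumeMounts: [{ name: "config", mountPath: "/etc/app" }],
        }],
        volumes: [{ name: "config", configMap: { name: "app-config" } }],
    },
});
```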
e
here I totally agree with you
c
I think we’re open to re-litigating that decision, but we never really hear complaints about that, so it would be unusual!
the create-and-delete thing though, that sounds like a bug.
e
pulumi v1.3.3
as for k8s, I have two of them listed
kubernetes  resource  1.2.3    48 MB  n/a        3 hours ago
kubernetes  resource  1.1.0    53 MB  n/a        3 hours ago
not sure what it means
c
I meant in your package.json or your pip file
e
"@pulumi/kubernetes": "^1.1.0",
c
1.2.3 is what you want.
ah yes
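For reference, the fix amounts to bumping the dependency in package.json and reinstalling (the version shown is the one recommended above):

```json
"dependencies": {
    "@pulumi/kubernetes": "^1.2.3"
}
```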
e
ok, let me try to update
c
this sounds like a bug in 1.1
e
in this particular case the software is smart enough to watch cm changes (it's argo-cd), so I don't care much about these things with references to the cm from deployments
c
our thought was, if people asked for it, we could eventually expose the option to turn this off.
no one ever asked, though.
e
does it work the same way for other resources, or only for secrets/configmaps on data change?
c
secrets and configmaps are replaced if you change `data` or `stringData`. Other resources are replaced only if you change immutable values.
so you can’t change a deployment’s `.spec.selector`; the API server will say “you can’t” because it’s immutable.
in this case we’ll schedule a replace.
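A short sketch of that case, assuming TypeScript (labels and image are illustrative):

```typescript
import * as k8s from "@pulumi/kubernetes";

// `.spec.selector` is immutable on apps/v1 Deployments. Changing the
// matchLabels below after the first deploy is rejected by the API server,
// so Pulumi schedules a replacement instead of an in-place update.
const web = new k8s.apps.v1.Deployment("web", {
    spec: {
        selector: { matchLabels: { app: "web" } }, // immutable after creation
        template: {
            metadata: { labels: { app: "web" } },
            spec: {
                containers: [{ name: "web", image: "nginx:1.17" }],
            },
        },
    },
});
```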
e
yep, upgrading to 1.2.3 solved the issue with creating the replacement before deleting the original. Thanks!
c
I’m glad to hear that
please don’t be shy if you see other issues!
we’d love to fix them
e
I'm a big fan of pulumi, so please expect new questions from me 🙂
c
always happy to help