wonderful-portugal-96162

02/17/2022, 4:17 PM
Hello all, just getting started and I ran into some behavior that I have questions on. 🧵
I originally was working with the Pulumi Kubernetes Operator, but I found the same behavior with `pulumi up` (doing it manually). I have this sample code that deploys a Deployment with one replica.
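Roughly the standard kubernetes-go template program (a sketch - the actual code isn't shown in the thread; only the `appDeploymentReplicas` variable comes up later):

```go
package main

import (
	appsv1 "github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes/apps/v1"
	corev1 "github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes/core/v1"
	metav1 "github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes/meta/v1"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// One replica to start with; later in the thread this is bumped to 2.
		appDeploymentReplicas := 1
		appLabels := pulumi.StringMap{"app": pulumi.String("nginx")}
		_, err := appsv1.NewDeployment(ctx, "nginx", &appsv1.DeploymentArgs{
			Spec: appsv1.DeploymentSpecArgs{
				Selector: &metav1.LabelSelectorArgs{MatchLabels: appLabels},
				Replicas: pulumi.Int(appDeploymentReplicas),
				Template: &corev1.PodTemplateSpecArgs{
					Metadata: &metav1.ObjectMetaArgs{Labels: appLabels},
					Spec: &corev1.PodSpecArgs{
						Containers: corev1.ContainerArray{
							corev1.ContainerArgs{
								Name:  pulumi.String("nginx"),
								Image: pulumi.String("nginx"),
							},
						},
					},
				},
			},
		})
		return err
	})
}
```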
`pulumi up --stack dev -y --refresh=true` does what's expected:
$ k get pods
NAME                              READY   STATUS    RESTARTS   AGE
nginx-fsbqexby-6799fc88d8-ch2b8   1/1     Running   0          42m
I delete the deployment and run `pulumi up --stack dev -y --refresh=true` again, and it creates the deployment as expected:
$ k get pods
NAME                              READY   STATUS              RESTARTS   AGE
nginx-w88w29hu-6799fc88d8-w4bvr   0/1     ContainerCreating   0          0s
Cool, let me scale the deployment (my cluster now has drift)
$ k scale deployment nginx-w88w29hu --replicas=3
deployment.apps/nginx-w88w29hu scaled

$ k get pods
NAME                              READY   STATUS              RESTARTS   AGE
nginx-w88w29hu-6799fc88d8-fdhkh   0/1     ContainerCreating   0          2s
nginx-w88w29hu-6799fc88d8-kq4pp   1/1     Running             0          2s
nginx-w88w29hu-6799fc88d8-w4bvr   1/1     Running             0          63s
Running `pulumi up --stack dev -y --refresh=true` again doesn't do anything. It reports `Resources: 3 unchanged` and the 3 replicas are still there:
$ k get pods
NAME                              READY   STATUS    RESTARTS   AGE
nginx-w88w29hu-6799fc88d8-fdhkh   1/1     Running   0          90s
nginx-w88w29hu-6799fc88d8-kq4pp   1/1     Running   0          90s
nginx-w88w29hu-6799fc88d8-w4bvr   1/1     Running   0          2m31s
My expectation is that `pulumi up` would detect the drift and correct it for me.
What's strange is that if I update my `main.go` file with `appDeploymentReplicas := 2`, running `pulumi up --stack dev -y --refresh=true` actually DOES correct the drift:
$ k get pods
NAME                              READY   STATUS    RESTARTS   AGE
nginx-w88w29hu-6799fc88d8-kq4pp   1/1     Running   0          4m31s
nginx-w88w29hu-6799fc88d8-w4bvr   1/1     Running   0          5m32s
Am I missing something?

prehistoric-activity-61023

02/17/2022, 4:36 PM
Hmm, it looks to me like `pulumi refresh` does not update the number of replicas.
You mentioned that it does see the drift when you update the source code but, in fact, I’d say it sees the difference between your expected state (source code) and the last known state.
We’ve got:
• expected state (source code)
• state (stored in Pulumi Service or custom backends)
• current state (actual state of your infrastructure)
`pulumi up` = compare expected state with state and perform actions to sync them
`pulumi refresh` = compare state with current state and allow overwriting state in case of differences
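In that model, reconciling drift is a two-step loop - refresh to pull the cluster’s current state into the stored state, then up to push the expected state back out:

$ pulumi refresh --stack dev -y
$ pulumi up --stack dev -y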
so the only logical explanation for the described scenario is that `pulumi refresh` can detect that the deployment was removed but doesn’t update `replicaCount` (so if the deployment is scaled, it doesn’t see any changes) - let me try to repro this

busy-journalist-6936

02/17/2022, 4:44 PM
👀

wonderful-portugal-96162

02/17/2022, 4:45 PM
@prehistoric-activity-61023 Correct, that's what I'm experiencing. I originally saw this in the Pulumi Kubernetes Operator and thought I had set up something wrong, but I was able to reproduce it with plain `pulumi`.

prehistoric-activity-61023

02/17/2022, 4:46 PM
hah, I wasn’t able to repro this
what I tried:
• `pulumi new kubernetes-python`
• `pulumi up` -> it created a new deployment with one replica
• I changed the source code (increased the number of replicas to 2)
• `pulumi up` -> it scaled the deployment
• I manually decreased the number of replicas via `k scale deploy nginx-biohg2sj --replicas=1`
• `pulumi up` -> nothing to do here (expected)
• `pulumi refresh` -> it properly detects that the actual state differs from what it thinks it should be 🤔
Type                              Name                   Plan       Info
     pulumi:pulumi:Stack               pulumi_playground-dev             
 ~   └─ kubernetes:apps/v1:Deployment  nginx                  update     [diff: ~metadata,spec,status]
...
      ~ spec      : {
            progressDeadlineSeconds: 600
          ~ replicas               : 2 => 1
...
I’m pretty sure once I accept the `refresh` action and do `pulumi up`, it’s gonna correct the drift.
or… not? ❓
Wow, that was unexpected 😄 Let me dig further.

wonderful-portugal-96162

02/17/2022, 4:59 PM
Ah okay, so I wasn't crazy (well maybe but not about this)

prehistoric-activity-61023

02/17/2022, 5:00 PM
there are a couple of ways to perform a kubernetes diff, and the one that’s used right now is not able to detect such changes
once server-side diff is used, it should properly detect such drift in configuration
@wonderful-portugal-96162 can you try to run `pulumi up` with an additional env var?
PULUMI_K8S_ENABLE_DRY_RUN=true pulumi up

wonderful-portugal-96162

02/17/2022, 5:10 PM
@prehistoric-activity-61023 Yup, that works exactly as expected 🙌

prehistoric-activity-61023

02/17/2022, 5:12 PM
I guess there are still some cases that won’t be caught with this method (try to add a new label using `kubectl edit`; if you expect that pulumi will remove it, you might be surprised)
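For example (hypothetical, reusing the deployment name from earlier):

$ k label deployment nginx-w88w29hu team=web

If you expect the next `pulumi up` to strip that label, it likely won’t: the program never declared it, so there’s nothing in the diff to remove it against.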
I’m not sure it should be considered a bug. That’s one of the parts of k8s that I don’t fully understand (yet) - calculating diffs. Apparently there are a couple of different ways to calculate a diff, and it heavily depends on how the changes are applied (declaratively with `kubectl apply`, via `kubectl edit`, or via `kubectl patch`).

prehistoric-activity-61023

02/17/2022, 5:16 PM
It seems that `PULUMI_K8S_ENABLE_DRY_RUN` enables the so-called server-side diff, and that’s why it can detect changes introduced locally (at least some of them?). AFAIK before that, a “magic annotation” `kubectl.kubernetes.io/last-applied-configuration` was used, and that’s why some changes remain undetected (they didn’t update this annotation).
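That’s easy to check for yourself: `kubectl scale` changes `spec.replicas` on the live object but leaves the annotation untouched, so a diff against the annotation sees nothing. Something like (assuming the deployment name from earlier):

$ k get deployment nginx-w88w29hu -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}'

should still print the old `replicas` value after a manual scale.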

wonderful-portugal-96162

02/17/2022, 5:17 PM
@prehistoric-activity-61023 Do you know if I can set `PULUMI_K8S_ENABLE_DRY_RUN` on the Pulumi Kubernetes Operator?

prehistoric-activity-61023

02/17/2022, 5:17 PM
I think here’s the main ticket: https://github.com/pulumi/pulumi-kubernetes/issues/1556
No idea if it’s gonna work with the k8s operator - sorry 😞. Additionally, I’m not sure how safe it is to use it now (remember, it’s not enabled by default and that’s a sign). From the provider docs:
`enableDryRun`
BETA FEATURE - If present and set to true, enable server-side diff calculations
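For what it’s worth, the flag can also be set on an explicit provider; a minimal sketch in Go, assuming the pulumi-kubernetes v3 SDK (the `EnableDryRun` field mirrors the beta config setting):

```go
package main

import (
	"github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// Explicit provider with the beta server-side diff flag enabled;
		// equivalent to `pulumi config set kubernetes:enableDryRun true`.
		_, err := kubernetes.NewProvider(ctx, "k8s", &kubernetes.ProviderArgs{
			EnableDryRun: pulumi.Bool(true),
		})
		// Resources must opt in to this provider explicitly, e.g.
		// appsv1.NewDeployment(ctx, "nginx", args, pulumi.Provider(provider)).
		return err
	})
}
```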

wonderful-portugal-96162

02/17/2022, 5:20 PM
Ah, so `pulumi config set kubernetes:enableDryRun`

busy-journalist-6936

02/17/2022, 5:23 PM
pulumi config set \
  --stack dev kubernetes:enableDryRun true
šŸ™ 1

wonderful-portugal-96162

02/17/2022, 5:52 PM
Setting that value works as expected 🙌