# pulumi-kubernetes-operator

brainy-window-77332

01/20/2022, 2:16 PM
Does the operator support drift detection?

sparse-park-68967

01/21/2022, 1:24 AM
Could you expand what you mean by drift detection? You want the defined stack settings to stomp on external modifications automatically?

brainy-window-77332

01/26/2022, 11:38 AM
Sorry for the delay, I got distracted by client work. Yes: if any changes are made to the infrastructure managed via Pulumi, I'd want it restored to the state defined in the Pulumi source code/YAML that the operator applies. Obviously you'd want this to be an optional feature that could be enabled.

sparse-park-68967

01/26/2022, 9:19 PM
So we have a means to do this now, though I haven't tried it explicitly. You can set the new `continueResyncOnCommitMatch` field on a stack and track a branch instead of a specific commit. This causes the stack to be resynced periodically even if the desired state specified in the HEAD commit on that branch has already been reached. You can also set the `resyncFrequencySeconds` field to control how often this resync occurs. You will also want to enable the `refresh` option to force Pulumi to refresh its state from the backing resources during a resync so it can detect drift. After that it should reconcile the state back to the desired state in the backing repository/branch.
👍 1
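[editor's note] In Stack-spec terms, the fields described above would look roughly like this (a minimal sketch; everything except the three drift-related fields is a placeholder, and a complete working example follows below):

```yaml
apiVersion: pulumi.com/v1
kind: Stack
metadata:
  name: my-stack        # placeholder name
spec:
  branch: "refs/heads/main"          # track a branch, not a pinned commit
  continueResyncOnCommitMatch: true  # keep resyncing even when HEAD is already applied
  resyncFrequencySeconds: 60         # how often to resync
  refresh: true                      # refresh state from the real resources to detect drift
```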

wonderful-portugal-96162

02/17/2022, 3:16 AM
So I came here to ask this very question and I'm trying these settings but it's not changing it back to the desired state
```yaml
apiVersion: pulumi.com/v1
kind: Stack
metadata:
  name: nginx-k8s-stack
spec:
  envRefs:
    PULUMI_ACCESS_TOKEN:
      type: Secret
      secret:
        name: pulumi-api-secret
        key: accessToken
  stack: christianh814/quickstart/dev
  projectRepo: https://github.com/christianh814/pulumi-k8s-operator-example
  branch: "refs/heads/main"
  destroyOnFinalize: true
  retryOnUpdateConflict: true
  continueResyncOnCommitMatch: true
  resyncFrequencySeconds: 30
  refresh: true
```
The code (Go) deploys the NGINX deployment with one replica. But when I scale it to 3...
```shell
k scale deployment nginx-j2wvuiws --replicas=3
```
It never goes back down to 1
```shell
$ k get pods -l app=nginx
NAME                              READY   STATUS    RESTARTS   AGE
nginx-j2wvuiws-85b98978db-dx4l7   1/1     Running   0          5m43s
nginx-j2wvuiws-85b98978db-sr8m8   1/1     Running   0          6m47s
nginx-j2wvuiws-85b98978db-zljgg   1/1     Running   0          5m43s
```
Running `pulumi stack history` shows that it is running (every 30 seconds, like I put in my config above). Am I missing something?
So I saw the same behavior with `pulumi up`, so maybe this is a generic thing... https://pulumi-community.slack.com/archives/C84L4E3N1/p1645114639793099
Just an update: running `pulumi config set kubernetes:enableDryRun true --stack dev` fixed my issue (until that's the default behavior).
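[editor's note] If I have it right, that command just writes the setting into the stack's config file (`Pulumi.dev.yaml` for the `dev` stack), so the equivalent file entry would be:

```yaml
# Pulumi.dev.yaml — provider config written by `pulumi config set`
config:
  kubernetes:enableDryRun: "true"
```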

quiet-wolf-18467

02/18/2022, 8:03 AM
I’m not really sure why that fixed it, I wouldn’t expect that to change anything

wonderful-portugal-96162

02/18/2022, 2:52 PM
I'm not 100% sure either. I don't know how Pulumi calculates the diff, whether it uses the same 3-way diff Kubernetes does or something else. But the behavior was: without that config, I experienced the above behavior.

sparse-park-68967

02/18/2022, 10:11 PM
Sorry I missed this - yes dry run would help here. See https://github.com/pulumi/pulumi-kubernetes/issues/694

quiet-wolf-18467

02/19/2022, 4:29 PM
TIL