# general
i
After upgrading to the latest CLI, each `pulumi up` began showing this diff:
~ kubernetes:policy/v1beta1:PodSecurityPolicy: (update)
    [id=prmetheus-operator-kube-state-metrics]
    [urn=urn:pulumi:staging-account::staging-account::...:account$...:eksCluster$aws:eks/cluster:Cluster$...:eksBasics$...:prometheus$kubernetes:helm.sh/v2:Chart$kubernetes:policy/v1beta1:PodSecurityPolicy::monitoring/prmetheus-operator-kube-state-metrics]
    [provider=urn:pulumi:staging-account::staging-account::...:account$...:eksCluster$pulumi:providers:kubernetes::k8s::989bfcae-4c24-4ad3-aba9-d1620215508b]
  ~ spec: {
      + hostIPC               : false
      + hostNetwork           : false
      + hostPID               : false
      + privileged            : false
      + readOnlyRootFilesystem: false
    }
~ kubernetes:policy/v1beta1:PodSecurityPolicy: (update)
    [id=prmetheus-operator-prometh-alertmanager]
    [urn=urn:pulumi:staging-account::staging-account::...:account$...a:eksCluster$aws:eks/cluster:Cluster$...:eksBasics$...:prometheus$kubernetes:helm.sh/v2:Chart$kubernetes:policy/v1beta1:PodSecurityPolicy::monitoring/prmetheus-operator-prometh-alertmanager]
    [provider=urn:pulumi:staging-account::staging-account::...:account$...:eksCluster$pulumi:providers:kubernetes::k8s::989bfcae-4c24-4ad3-aba9-d1620215508b]
  ~ spec: {
      + hostIPC               : false
      + hostNetwork           : false
      + hostPID               : false
      + privileged            : false
      + readOnlyRootFilesystem: false
    }
Any idea how to get rid of those diffs (and the redundant updates they trigger)?
c
@incalculable-diamond-5088 this will be fixed soon.
this is probably the spurious diff bug.
i
@creamy-potato-29402 Thanks for the response! Is there any ETA on that? Or does switching to an older `@pulumi/kubernetes` fix this bug?
p
Same here 😞
c
@incalculable-diamond-5088 @plain-businessperson-30883 switch back to an older version for now. cc @microscopic-florist-22719
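(For anyone following along: one way to switch back is to pin `@pulumi/kubernetes` to an exact earlier release in `package.json` and reinstall. The version number below is illustrative, not a confirmed fix version; use whichever release last worked for your stack.)

```json
{
  "dependencies": {
    "@pulumi/kubernetes": "0.22.0"
  }
}
```

Using an exact version (no `^` or `~` prefix) keeps `npm install` from silently upgrading back to the affected release.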
👍 1
we’re hoping to land it tomorrow.
❤️ 2
i
@white-balloon-205 I meant to write it here. See the diff above