# general
After upgrading to Pulumi 3.47.2, all of our Pulumi Kubernetes resources throw errors saying they violate the plan, and after the first error the number of errors increases on the second run.
resource urn:pulumi:dev::us-cluster-1::kubernetes:helm.sh/v3:Release::kube-prometheus-stack violates plan: properties changed: ++name[{kube-prometheus-stack-63216a72}!={kube-prometheus-stack-73270e19}], ++resourceNames[{map[Alertmanager.monitoring.coreos.com/monitoring.coreos.com/v1:{[{monitoring/kube-prometheus-stack-6321-alertmanager}]}
ClusterRole.rbac.authorization.k8s.io/rbac.authorization.k8s.io/v1:{[{kube-prometheus-stack-6321-admission} {kube-prometheus-stack-6321-operator} {kube-prometheus-stack-6321-prometheus} {kube-prometheus-stack-63216a72-grafana-clusterrole} {kube-prometheus-stack-63216a72-kube-state-metrics}]} ClusterRoleBinding.rbac.authorization.k8s.io/rbac.authorization.k8s.io/v1:{[{kube-prometheus-stack-6321-admission} {kube-prometheus-stack-6321-operator} {kube-prometheus-stack-6321-prometheus} {kube-prometheus-stack-63216a72-grafana-clusterrolebinding} {kube-prometheus-stack-63216a72-kube-state-metrics}]} ConfigMap/v1:{[{monitoring/kube-prometheus-stack-6321-alertmanager-overview} {monitoring/kube-prometheus-stack-6321-apiserver} {monitoring/kube-prometheus-stack-6321-cluster-total} {monitoring/kube-prometheus-stack-6321-controller-manager} {monitoring/kube-prometheus-stack-6321-etcd} {monitoring/kube-prometheus-stack-6321-grafana-datasource} {monitoring/kube-prometheus-stack-6321-grafana-overview} {monitoring/kube-prometheus-stack-6321-k8s-coredns} {monitoring/kube-prometheus-stack-6321-k8s-resources-cluster} {monitoring/kube-prometheus-stack-6321-k8s-resources-namespace} {monitoring/kube-prometheus-stack-6321-k8s-resources-node} {monitoring/kube-prometheus-stack-6321-k8s-resources-pod} {monitoring/kube-prometheus-stack-6321-k8s-resources-workload} {monitoring/kube-prometheus-stack-6321-k8s-resources-workloads-namespace} {monitoring/kube-prometheus-stack-6321-kubelet} {monitoring/kube-prometheus-stack-6321-namespace-by-pod} {monitoring/kube-prometheus-stack-6321-namespace-by-workload} {monitoring/kube-prometheus-stack-6321-node-cluster-rsrc-use} {monitoring/kube-prometheus-stack-6321-node-rsrc-use} {monitoring/kube-prometheus-stack-6321-nodes} {monitoring/kube-prometheus-stack-6321-nodes-darwin} {monitoring/kube-prometheus-stack-6321-persistentvolumesusage} {monitoring/kube-prometheus-stack-6321-pod-total} {monitoring/kube-prometheus-stack-6321-prometheus} {monitoring/kube-prometheus-stack-6321-proxy} {monitoring/kube-prometheus-stack-6321-scheduler} {monitoring/kube-prometheus-stack-6321-workload-total} {monitoring/kube-prometheus-stack-63216a72-grafana} {monitoring/kube-prometheus-stack-63216a72-grafana-config-dashboards} {monitoring/kube-prometheus-stack-63216a72-grafana-test}]} CustomResourceDefinition.apiextensions.k8s.io/apiextensions.k8s.io/v1:{[{alertmanagerconfigs.monitoring.coreos.com}
And the errors continue like that; the full output exceeds the Slack message limit. After reverting to 3.47.1, everything is fine again.
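For context, here is a minimal sketch of the kind of Helm v3 Release declaration these errors point at (TypeScript; the chart repository URL, namespace, and options are placeholders for illustration, not our actual config):

```typescript
import * as k8s from "@pulumi/kubernetes";

// Sketch of the Release that the "violates plan" errors refer to.
// Repository URL, namespace, and the lack of values are assumptions;
// our real program differs, but the resource shape is the same.
const kubePrometheusStack = new k8s.helm.v3.Release("kube-prometheus-stack", {
    chart: "kube-prometheus-stack",
    namespace: "monitoring",
    repositoryOpts: {
        repo: "https://prometheus-community.github.io/helm-charts",
    },
});

// The release name is auto-suffixed by Pulumi; the error above compares
// two different generated suffixes (…-63216a72 vs …-73270e19).
export const releaseName = kubePrometheusStack.name;
```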