
important-holiday-25047

11/23/2022, 2:14 PM
Hi, I am having a problem with pulumi up and Helm charts. First up: everything worked fine on Friday when we last deployed. The problem: if I run pulumi up in our pipeline, our RabbitMQ Helm chart does not update, with the following error:
development/rabbitmq violates plan: properties changed:
At first it only complained that the Erlang cookie changed; after a pulumi refresh it is more:
error: resource urn:pulumi:Development::Cloud::kubernetes:helm.sh/v3:Chart$kubernetes:apps/v1:StatefulSet::development/rabbitmq violates plan: properties changed: ~~spec[{map[podManagementPolicy:{OrderedReady} replicas:{1} selector:{map[matchLabels:{map[app.kubernetes.io/instance:{rabbitmq} app.kubernetes.io/name:{rabbitmq}]}]} serviceName:{rabbitmq-headless} template:{map[metadata:{map[annotations:{map[checksum/config:{105414d1c6b687cc6720aaa6aeabae8605b2c47f60956643a158b8795a9fee05} checksum/secret:{0c8c4dcfdfcceeb8d55ca3d08ee5fa2815b03c1505fa60a83406922bb2dd8428}]} labels:{map[app.kubernetes.io/instance:{rabbitmq} app.kubernetes.io/managed-by:{Helm} app.kubernetes.io/name:{rabbitmq} helm.sh/chart:{rabbitmq-11.1.1}]}]} spec:{map[affinity:{map[nodeAffinity:{map[requiredDuringSchedulingIgnoredDuringExecution:{map[nodeSelectorTerms:{[{map[matchExpressions:{[{map[key:{performance} operator:{In} values:{[{slow}]}]}]}]}]}]}]}]} containers:{[{map[env:{[{map[name:{BITNAMI_DEBUG} value:{false}]} {map[name:{MY_POD_IP} valueFrom:{map[fieldRef:{map[fieldPath:{status.podIP}]}]}]} {map[name:{MY_POD_NAME} valueFrom:{map[fieldRef:{map[fieldPath:{metadata.name}]}]}]} {map[name:{MY_POD_NAMESPACE} valueFrom:{map[fieldRef:{map[fieldPath:{metadata.namespace}]}]}]} {map[name:{K8S_SERVICE_NAME} value:{rabbitmq-headless}]} {map[name:{K8S_ADDRESS_TYPE} value:{hostname}]} {map[name:{RABBITMQ_FORCE_BOOT} value:{no}]} {map[name:{RABBITMQ_NODE_NAME} value:{rabbit@$(MY_POD_NAME).$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local}]} {map[name:{K8S_HOSTNAME_SUFFIX} value:{.$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local}]} {map[name:{RABBITMQ_MNESIA_DIR} value:{/bitnami/rabbitmq/mnesia/$(RABBITMQ_NODE_NAME)}]} {map[name:{RABBITMQ_LDAP_ENABLE} value:{no}]} {map[name:{RABBITMQ_LOGS} value:{-}]} {map[name:{RABBITMQ_ULIMIT_NOFILES} value:{65536}]} {map[name:{RABBITMQ_USE_LONGNAME} value:{true}]} {map[name:{RABBITMQ_ERL_COOKIE} valueFrom:{map[secretKeyRef:{map[key:{rabbitmq-erlang-cookie} name:{rabbitmq}]}]}]} {map[name:{RABBITMQ_LOAD_DEFINITIONS} value:{no}]} {map[name:{RABBITMQ_DEFINITIONS_FILE} value:{/app/load_definition.json}]} {map[name:{RABBITMQ_SECURE_PASSWORD} value:{yes}]} {map[name:{RABBITMQ_USERNAME} value:{user}]} {map[name:{RABBITMQ_PASSWORD} valueFrom:{map[secretKeyRef:{map[key:{rabbitmq-password} name:{rabbitmq}]}]}]} {map[name:{RABBITMQ_PLUGINS} value:{rabbitmq_management, rabbitmq_peer_discovery_k8s, rabbitmq_delayed_message_exchange}]} {map[name:{RABBITMQ_COMMUNITY_PLUGINS} value:{https://github.com/rabbitmq/rabbitmq-delayed-message-exchange/releases/download/3.11.1/rabbitmq_delayed_message_exchange-3.11.1.ez}]}]} image:{marketplace.azurecr.io/bitnami/rabbitmq:3.11.2-debian-11-r0} imagePullPolicy:{IfNotPresent} lifecycle:{map[preStop:{map[exec:{map[command:{[{/bin/bash} {-ec} {if [[ -f /opt/bitnami/scripts/rabbitmq/nodeshutdown.sh ]]; then /opt/bitnami/scripts/rabbitmq/nodeshutdown.sh -t "120" -d "false" else rabbitmqctl stop_app fi }]}]}]}]} livenessProbe:{map[exec:{map[command:{[{/bin/bash} {-ec} {rabbitmq-diagnostics -q ping}]}]} failureThreshold:{6} initialDelaySeconds:{120} periodSeconds:{30} successThreshold:{1} timeoutSeconds:{20}]} name:{rabbitmq} ports:{[{map[containerPort:{5672} name:{amqp}]} {map[containerPort:{25672} name:{dist}]} {map[containerPort:{15672} name:{stats}]} {map[containerPort:{4369} name:{epmd}]}]} readinessProbe:{map[exec:{map[command:{[{/bin/bash} {-ec} {rabbitmq-diagnostics -q check_running && rabbitmq-diagnostics -q check_local_alarms}]}]} failureThreshold:{3} initialDelaySeconds:{10} periodSeconds:{30} successThreshold:{1} timeoutSeconds:{20}]} resources:{map[limits:{map[]} requests:{map[]}]} securityContext:{map[runAsNonRoot:{true} runAsUser:{1001}]} volumeMounts:{[{map[mountPath:{/bitnami/rabbitmq/conf} name:{configuration}]} {map[mountPath:{/bitnami/rabbitmq/mnesia} name:{data}]}]}]}]} securityContext:{map[fsGroup:{1001}]} serviceAccountName:{rabbitmq} terminationGracePeriodSeconds:{120} volumes:{[{map[name:{configuration} secret:{map[items:{[{map[key:{rabbitmq.conf} path:{rabbitmq.conf}]}]} secretName:{rabbitmq-config}]}]} {map[emptyDir:{map[]} name:{data}]}]}]}]} updateStrategy:{map[type:{RollingUpdate}]}]}
!={map[podManagementPolicy:{OrderedReady} replicas:{1} selector:{map[matchLabels:{map[app.kubernetes.io/instance:{rabbitmq} app.kubernetes.io/name:{rabbitmq}]}]} serviceName:{rabbitmq-headless} template:{map[metadata:{map[annotations:{map[checksum/config:{105414d1c6b687cc6720aaa6aeabae8605b2c47f60956643a158b8795a9fee05} checksum/secret:{bc2150c097b4a1af2d5acd584ff944c61404fe43403d0f1aeeff50fba12d4c42}]} labels:{map[app.kubernetes.io/instance:{rabbitmq} app.kubernetes.io/managed-by:{Helm} app.kubernetes.io/name:{rabbitmq} helm.sh/chart:{rabbitmq-11.1.1}]}]} spec:{map[affinity:{map[nodeAffinity:{map[requiredDuringSchedulingIgnoredDuringExecution:{map[nodeSelectorTerms:{[{map[matchExpressions:{[{map[key:{performance} operator:{In} values:{[{slow}]}]}]}]}]}]}]}]} containers:{[{map[env:{[{map[name:{BITNAMI_DEBUG} value:{false}]} {map[name:{MY_POD_IP} valueFrom:{map[fieldRef:{map[fieldPath:{status.podIP}]}]}]} {map[name:{MY_POD_NAME} valueFrom:{map[fieldRef:{map[fieldPath:{metadata.name}]}]}]} {map[name:{MY_POD_NAMESPACE} valueFrom:{map[fieldRef:{map[fieldPath:{metadata.namespace}]}]}]} {map[name:{K8S_SERVICE_NAME} value:{rabbitmq-headless}]} {map[name:{K8S_ADDRESS_TYPE} value:{hostname}]} {map[name:{RABBITMQ_FORCE_BOOT} value:{no}]} {map[name:{RABBITMQ_NODE_NAME} value:{rabbit@$(MY_POD_NAME).$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local}]} {map[name:{K8S_HOSTNAME_SUFFIX} value:{.$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local}]} {map[name:{RABBITMQ_MNESIA_DIR} value:{/bitnami/rabbitmq/mnesia/$(RABBITMQ_NODE_NAME)}]} {map[name:{RABBITMQ_LDAP_ENABLE} value:{no}]} {map[name:{RABBITMQ_LOGS} value:{-}]} {map[name:{RABBITMQ_ULIMIT_NOFILES} value:{65536}]} {map[name:{RABBITMQ_USE_LONGNAME} value:{true}]} {map[name:{RABBITMQ_ERL_COOKIE} valueFrom:{map[secretKeyRef:{map[key:{rabbitmq-erlang-cookie} name:{rabbitmq}]}]}]} {map[name:{RABBITMQ_LOAD_DEFINITIONS} value:{no}]} {map[name:{RABBITMQ_DEFINITIONS_FILE} value:{/app/load_definition.json}]} {map[name:{RABBITMQ_SECURE_PASSWORD} value:{yes}]} {map[name:{RABBITMQ_USERNAME} value:{user}]} {map[name:{RABBITMQ_PASSWORD} valueFrom:{map[secretKeyRef:{map[key:{rabbitmq-password} name:{rabbitmq}]}]}]} {map[name:{RABBITMQ_PLUGINS} value:{rabbitmq_management, rabbitmq_peer_discovery_k8s, rabbitmq_delayed_message_exchange}]} {map[name:{RABBITMQ_COMMUNITY_PLUGINS} value:{https://github.com/rabbitmq/rabbitmq-delayed-message-exchange/releases/download/3.11.1/rabbitmq_delayed_message_exchange-3.11.1.ez}]}]} image:{marketplace.azurecr.io/bitnami/rabbitmq:3.11.2-debian-11-r0} imagePullPolicy:{IfNotPresent} lifecycle:{map[preStop:{map[exec:{map[command:{[{/bin/bash} {-ec} {if [[ -f /opt/bitnami/scripts/rabbitmq/nodeshutdown.sh ]]; then /opt/bitnami/scripts/rabbitmq/nodeshutdown.sh -t "120" -d "false" else rabbitmqctl stop_app fi }]}]}]}]} livenessProbe:{map[exec:{map[command:{[{/bin/bash} {-ec} {rabbitmq-diagnostics -q ping}]}]} failureThreshold:{6} initialDelaySeconds:{120} periodSeconds:{30} successThreshold:{1} timeoutSeconds:{20}]} name:{rabbitmq} ports:{[{map[containerPort:{5672} name:{amqp}]} {map[containerPort:{25672} name:{dist}]} {map[containerPort:{15672} name:{stats}]} {map[containerPort:{4369} name:{epmd}]}]} readinessProbe:{map[exec:{map[command:{[{/bin/bash} {-ec} {rabbitmq-diagnostics -q check_running && rabbitmq-diagnostics -q check_local_alarms}]}]} failureThreshold:{3} initialDelaySeconds:{10} periodSeconds:{30} successThreshold:{1} timeoutSeconds:{20}]} resources:{map[limits:{map[]} requests:{map[]}]} securityContext:{map[runAsNonRoot:{true} runAsUser:{1001}]} volumeMounts:{[{map[mountPath:{/bitnami/rabbitmq/conf} name:{configuration}]} {map[mountPath:{/bitnami/rabbitmq/mnesia} name:{data}]}]}]}]} securityContext:{map[fsGroup:{1001}]} serviceAccountName:{rabbitmq} terminationGracePeriodSeconds:{120} volumes:{[{map[name:{configuration} secret:{map[items:{[{map[key:{rabbitmq.conf} path:{rabbitmq.conf}]}]} secretName:{rabbitmq-config}]}]} {map[emptyDir:{map[]} name:{data}]}]}]}]} updateStrategy:{map[type:{RollingUpdate}]}]}]
If I run pulumi up locally, a RabbitMQ pod is created and can be viewed in Kubernetes, but the pod still reports errors even though it is marked as running:
Warning  Unhealthy  38s                kubelet            Readiness probe failed: Error:
RabbitMQ on node rabbit@rabbitmq-0.rabbitmq-headless.development.svc.cluster.local is not running or has not fully booted yet (check with is_booting)
Locally, the Redis pod is stuck during pulumi up with:
Finding Pods to direct traffic to
Does anybody have any idea what could be wrong here?

many-telephone-49025

11/23/2022, 2:19 PM
Do you by chance have the full Pulumi program in a public repo so I can have a look?

important-holiday-25047

11/23/2022, 2:24 PM
Sadly not, no
We also did not change anything since Friday, when everything worked.
So I doubt that it has anything to do with the Pulumi code per se.

many-telephone-49025

11/23/2022, 2:25 PM
Were there any updates in that time: • provider • Helm?

important-holiday-25047

11/23/2022, 2:25 PM
I checked the Helm chart and tried pinning the version to one from before last Friday, but sadly without any luck.
On Redis there was no update since Friday.
Provider, you mean the image that Pulumi runs in? I have to check; we build a new image every night so the most recent updates are always applied.

many-telephone-49025

11/23/2022, 2:27 PM
Did you try a
pulumi refresh

important-holiday-25047

11/23/2022, 2:27 PM
I did; after the pulumi refresh the huge error message appeared (before that, RabbitMQ only stated that it cannot update because the Erlang cookie changed).
Both say basically the same thing: rabbitmq violates plan due to property changes.

many-telephone-49025

11/23/2022, 2:28 PM
Can you send me the Helm chart URLs so I can try to recreate the situation? The versions would be helpful too. What Pulumi-supported language are you using?

important-holiday-25047

11/23/2022, 2:28 PM
typescript

gentle-librarian-84908

11/23/2022, 2:33 PM
I am having a similar issue; today our pipeline started to fail at the pulumi up stage. Every time I try to deploy new code, it fails on creating the Docker image.
docker:image:Image (bayer-cf-api-v2)
    error: resource violates plan: properties changed:
No Pulumi code was changed since yesterday, and I'm also using TypeScript. Maybe it's correlated?

important-holiday-25047

11/23/2022, 2:33 PM
Chart instances would be created like this — rabbit:
new kub.helm.v3.Chart("rabbitmq", {
    namespace: "SomeNamespace",
    chart: "rabbitmq",
    fetchOpts: {
        repo: "https://marketplace.azurecr.io/helm/v1/repo",
    },
    values: {
        auth: {
            password: "SomePassword",
        },
        persistence: {
            enabled: false
        },
        service: {
            type: "LoadBalancer"
        },
        affinity: {
            nodeAffinity: {
                requiredDuringSchedulingIgnoredDuringExecution: {
                    nodeSelectorTerms: [
                        {
                            matchExpressions: [
                                {
                                    key: "performance",
                                    operator: "In",
                                    values: ["slow"]
                                }
                            ]
                        }
                    ]
                }
            }
        },
        communityPlugins: "https://github.com/rabbitmq/rabbitmq-delayed-message-exchange/releases/download/3.11.1/rabbitmq_delayed_message_exchange-3.11.1.ez",
        extraPlugins: "rabbitmq_delayed_message_exchange",
        additionalPlugins: "rabbitmq_delayed_message_exchange"
    }
});

many-telephone-49025

11/23/2022, 2:34 PM
Hi @gentle-librarian-84908 thanks for the feedback

important-holiday-25047

11/23/2022, 2:36 PM
redis:
new kub.helm.v3.Chart("redis", {
    namespace: "SomeNamespace",
    chart: "redis",
    fetchOpts: {
        repo: "https://marketplace.azurecr.io/helm/v1/repo",
    },
    values: {
        auth: {
            password: "SomePassword",
        },
        persistence: {
            enabled: false
        },
        service: {
            type: "LoadBalancer"
        },
        architecture: "standalone",
        master: {
            service: {
                type: "LoadBalancer"
            },
            affinity: {
                nodeAffinity: {
                    requiredDuringSchedulingIgnoredDuringExecution: {
                        nodeSelectorTerms: [
                            {
                                matchExpressions: [
                                    {
                                        key: "performance",
                                        operator: "In",
                                        values: ["slow"]
                                    }
                                ]
                            }
                        ]
                    }
                }
            },
            persistence: {
                enabled: false
            },
            disableCommands: []
        }
    }
});
The values were put together now by hand as we have a few methods in there so I copy pasted it together, I hope it is correct... ^^

many-telephone-49025

11/23/2022, 2:38 PM
Many thanks @important-holiday-25047

echoing-dinner-19531

11/23/2022, 2:39 PM
This is an issue with 3.47.2: https://github.com/pulumi/pulumi/issues/11444. I'm working on getting a new release out with the fix. It's only a problem when running with "--yes"; interactive use should be fine.

important-holiday-25047

11/23/2022, 2:40 PM
Ah, cool, thanks for the info @echoing-dinner-19531. Do you have any idea when a new release will be ready?

echoing-dinner-19531

11/23/2022, 2:42 PM
I have no idea; GitHub Actions is causing us a lot of issues at the moment, which is slowing everything down. If you can pin your pipelines to an older Pulumi version, that would be the most immediate fix.
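For example, in the pipeline image you could pin the CLI version instead of always installing the latest (a sketch; the exact install step depends on how your nightly image is built):

```shell
# Install a specific Pulumi CLI version instead of the latest release
curl -fsSL https://get.pulumi.com | sh -s -- --version 3.46.1

# Confirm the pinned version is the one on PATH
pulumi version
```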

important-holiday-25047

11/23/2022, 2:44 PM
Ok, thanks, I will have a look into that then
I went back to version 3.46.1 now but it still has issues deploying everything:
Diagnostics:
  kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:Secret (development/rabbitmq)
    error: resource urn:pulumi:Development::Cloud::kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:Secret::development/rabbitmq violates plan: properties changed: ~~data[{&{{map[rabbitmq-erlang-cookie:{c1FRMDk1WDBWN0NWdkZncVFzU1V6cEt3ZmRFd2E1WTA=} rabbitmq-password:{Z1V2VFVEMUlkUDJzTmQtLUpYNFNTTFlk}]}}}!={&{{map[rabbitmq-erlang-cookie:{dUJUNWpodFRJN1lPSktWQnRJYjMxN2pEaUhxcnUyVU0=} rabbitmq-password:{Z1V2VFVEMUlkUDJzTmQtLUpYNFNTTFlk}]}}}]

  kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:Service (development/redis-headless)
    error: 2 errors occurred:
      * the Kubernetes API server reported that "development/redis-headless" failed to fully initialize or become live: 'redis-headless' timed out waiting to be Ready
      * Service does not target any Pods. Selected Pods may not be ready, or field '.spec.selector' may not match labels on any Pods

  kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:Service (development/redis-master)
    error: 2 errors occurred:
      * the Kubernetes API server reported that "development/redis-master" failed to fully initialize or become live: 'redis-master' timed out waiting to be Ready
      * Service does not target any Pods. Selected Pods may not be ready, or field '.spec.selector' may not match labels on any Pods
Could it be that the one run broke something more crucial and now we need to do something manually to get it up and running again?

echoing-dinner-19531

11/23/2022, 4:26 PM
I'd try running a
pulumi refresh
That might pick up anything that's inconsistent from the first failed run

important-holiday-25047

11/23/2022, 5:09 PM
Sadly a pulumi refresh does not solve the problem

echoing-dinner-19531

11/23/2022, 5:11 PM
Wait is 3.46.1 still giving "violates plan" errors?

important-holiday-25047

11/23/2022, 7:12 PM
It does, yes

echoing-dinner-19531

11/23/2022, 7:34 PM
Hmm, keep downgrading then. I thought this was introduced with 3.47, but you might have to go back to 3.45 (hopefully not any earlier). It should be fixed soon, but we seem to have been doubly cursed, with everyone going off for Thanksgiving and the CI system failing at the same time.

important-holiday-25047

11/23/2022, 8:08 PM
I'll test it tomorrow and get back to you if I find a version that works. Thanks

quaint-match-50796

11/23/2022, 8:25 PM
I downgraded from 3.47.2 to 3.47.1, and now the Kubernetes provider is working properly. We had issues with Releases and Namespaces.

important-holiday-25047

11/24/2022, 10:44 AM
Sadly, we still have the problem with 3.47.1, and also with every version down to 3.45.0. It still says it violates the plan on the RabbitMQ Helm chart. On the Redis Helm chart it says: Finding Pods to direct traffic to. I did a pulumi refresh with each version before I tried to deploy again.
@quaint-match-50796 Did you do a pulumi refresh between your tries?
When I try it locally it does delete the old RabbitMQ and creates a pod; Redis still tries to find a pod to direct traffic to. The rabbit pod is not functional though:
RabbitMQ on node rabbit@rabbitmq-0.rabbitmq-headless.development.svc.cluster.local is not running or has not fully booted yet (check with is_booting)

echoing-dinner-19531

11/24/2022, 11:01 AM
You don't have PULUMI_EXPERIMENTAL set to true, do you? That might be turning planning on even for older versions, but then I'd expect you to have seen this before 3.47.

quaint-match-50796

11/24/2022, 11:08 AM
No, I didn't. When we noticed the error, we tried on another stack. When the error came again, we decided to move back and avoid refresh changing anything.
My experimental is off. And it tried the new way.

important-holiday-25047

11/24/2022, 11:16 AM
No, experimental is off here as well
I fear that the pulumi refresh did create a somewhat troubling state. 😕
Can we clean the stack somehow from those two entries that are affected?

quaint-match-50796

11/24/2022, 11:25 AM
pulumi state delete &lt;urn&gt;
But look at the whole hierarchy, just to be sure your systems won't get in trouble.
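For example (a sketch; substitute the resource URN from your own stack, and export first so you have a backup):

```shell
# Back up the full stack state first, so the delete can be undone
pulumi stack export --file state-backup.json

# Delete the resource (and, with --target-dependents, its children) from
# Pulumi state only; nothing is removed from the cluster itself
pulumi state delete '<resource-urn>' --target-dependents
```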

important-holiday-25047

11/24/2022, 11:25 AM
But, does it then not also try to recreate all blob storages and other things that already exist?

quaint-match-50796

11/24/2022, 11:26 AM
Maybe you delete the state and manually import the resource?

important-holiday-25047

11/24/2022, 11:26 AM
We create a bunch of things with pulumi, blob storages, docker container deployments, helm charts....

quaint-match-50796

11/24/2022, 11:27 AM

important-holiday-25047

11/24/2022, 11:27 AM
Oh, the "urn" tells it the specific resource? I overlooked that parameter.

quaint-match-50796

11/24/2022, 11:27 AM
Yes, urn is the specific urn from the resource

important-holiday-25047

11/24/2022, 1:22 PM
I deleted the URNs now, but I have a problem importing the resources again. The output is a bit misleading:
kubernetes:helm.sh/v3:Chart$kubernetes:apps/v1:StatefulSet (development/rabbitmq)
    error: inputs to import do not match the existing resource

  kubernetes:helm.sh/v3:Chart$kubernetes:apps/v1:StatefulSet (development/redis-master)
    error: resource 'development/redis' does not exist

  kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:ConfigMap (development/redis-configuration)
    error: resource 'development/redis' does not exist

  kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:ConfigMap (development/redis-health)
    error: resource 'development/redis' does not exist

  kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:ConfigMap (development/redis-scripts)
    error: resource 'development/redis' does not exist

  kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:Secret (development/rabbitmq)
    error: inputs to import do not match the existing resource

  kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:Secret (development/rabbitmq-config)
    error: inputs to import do not match the existing resource

  kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:Secret (development/redis)
    error: inputs to import do not match the existing resource

  kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:Service (development/rabbitmq)
    error: inputs to import do not match the existing resource

  kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:Service (development/rabbitmq-headless)
    error: inputs to import do not match the existing resource

  kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:Service (development/redis-headless)
    error: resource 'development/redis' does not exist

  kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:Service (development/redis-master)
    error: resource 'development/redis' does not exist

  kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:ServiceAccount (development/rabbitmq)
    error: inputs to import do not match the existing resource

  kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:ServiceAccount (development/redis)
    error: inputs to import do not match the existing resource

  kubernetes:helm.sh/v3:Chart$kubernetes:rbac.authorization.k8s.io/v1:Role (development/rabbitmq-endpoint-reader)
    error: resource 'development/rabbitmq' does not exist

  kubernetes:helm.sh/v3:Chart$kubernetes:rbac.authorization.k8s.io/v1:RoleBinding (development/rabbitmq-endpoint-reader)
    error: resource 'development/rabbitmq' does not exist
So either it does not exist, or some value differs, but sadly I have no idea what would differ, as I only added the import string and changed nothing else.

quaint-match-50796

11/24/2022, 1:23 PM
These are the resources created by the release.
Was the Helm chart still deployed?

important-holiday-25047

11/24/2022, 1:24 PM
I still have it in the kubernetes cluster

quaint-match-50796

11/24/2022, 1:24 PM
You will have to generate the whole chain of imports. At least, I don't know any automatic way of doing it.

important-holiday-25047

11/24/2022, 1:24 PM
Under services I still see the redis + redis headless, rabbitmq + rabbitmq-headless
What do you mean by generating the whole chain?

quaint-match-50796

11/24/2022, 1:26 PM
Release generates child URNs from the main one. I think you should import all the child resources; I'm not sure that child resources get imported automatically when using Releases.
Importing manually is an annoying task.
If the RabbitMQ doesn't have any stateful data at the moment, I would just take it down and recreate it. Only the RabbitMQ release. It would be a lot faster.

important-holiday-25047

11/24/2022, 1:32 PM
So you mean just deleting the rabbit + redis in azure in kubernetes under services?

quaint-match-50796

11/24/2022, 1:33 PM
Using helm.
Uninstall the release.
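Roughly like this (a sketch; I'm assuming the release is named rabbitmq and lives in the development namespace — adjust to what helm list actually shows):

```shell
# Find the release, including failed/pending ones
helm list -a -n development

# Remove it, along with the Kubernetes objects it created
helm uninstall rabbitmq -n development
```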

important-holiday-25047

11/24/2022, 2:06 PM
I honestly have no idea how to do that

quaint-match-50796

11/24/2022, 2:06 PM

important-holiday-25047

11/24/2022, 2:07 PM
I logged into our registry with helm, but list does not give me anything
Sadly I am not the guy who normally works on the release part, that one is ill and we don't know when he is back

quaint-match-50796

11/24/2022, 2:07 PM
helm list

important-holiday-25047

11/24/2022, 2:07 PM
I have no output there
helm list -a
NAME    NAMESPACE       REVISION        UPDATED STATUS  CHART   APP VERSION
Thats it

quaint-match-50796

11/24/2022, 2:08 PM
Is your Kubernetes context active?

important-holiday-25047

11/24/2022, 2:08 PM
With kubectl I normally can see everything.
No pods are available for the development namespace though

quaint-match-50796

11/24/2022, 2:09 PM
Can you try to get all elements from the development namespace?

important-holiday-25047

11/24/2022, 2:10 PM
I did and I found the entries for redis/rabbit:
statefulset.apps/rabbitmq       0/0     282d
statefulset.apps/redis-master   0/0     271d

service/rabbitmq            LoadBalancer   IP   IP   5672:31923/TCP,4369:32260/TCP,25672:32374/TCP,15672:30543/TCP   282d
service/rabbitmq-headless   ClusterIP      None           <none>        4369/TCP,5672/TCP,25672/TCP,15672/TCP                           282d
service/redis-headless      ClusterIP      None           <none>        6379/TCP                                                        282d
service/redis-master        LoadBalancer   IP   IP    6379:30840/TCP                                                  282d

quaint-match-50796

11/24/2022, 2:10 PM
Delete the services and the statefulset.
It appears that there was a deployment error, since the release is not there but parts of its elements are.
The Helm release should at least be listed, though, as Failed or Pending-Upgrade. That's a bit weird.
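To clean out the leftovers by hand, something like this (names and namespace taken from your kubectl output above; double-check before deleting):

```shell
# Remove the orphaned workloads and services left behind in the namespace
kubectl delete statefulset rabbitmq redis-master -n development
kubectl delete service rabbitmq rabbitmq-headless redis-headless redis-master -n development
```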

important-holiday-25047

11/24/2022, 2:52 PM
It still does not work. I deleted the services and the statefulsets, I did a pulumi refresh afterwards, but I still get the "rabbitmq violates plan" message and redis cannot deploy as it still finds some things...
kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:ConfigMap (development/redis-configuration)
    error: resource development/redis-configuration was not successfully created by the Kubernetes API server : configmaps "redis-configuration" already exists

  kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:ConfigMap (development/redis-health)
    error: resource development/redis-health was not successfully created by the Kubernetes API server : configmaps "redis-health" already exists

  kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:ConfigMap (development/redis-scripts)
    error: resource development/redis-scripts was not successfully created by the Kubernetes API server : configmaps "redis-scripts" already exists

  kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:Secret (development/rabbitmq)
    error: resource urn:pulumi:Development::Cloud::kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:Secret::development/rabbitmq violates plan: properties changed: ++data[{&{{map[rabbitmq-erlang-cookie:{Ymc4RmFqcncySHNrdElqb0dPSk5iVEtoemFGM2dyU2s=} rabbitmq-password:{Z1V2VFVEMUlkUDJzTmQtLUpYNFNTTFlk}]}}}!={&{{map[rabbitmq-erlang-cookie:{d3NtZ05sekZLQUpOam1PZkdZZjVUQVRhVXhWNjc2OEs=} rabbitmq-password:{Z1V2VFVEMUlkUDJzTmQtLUpYNFNTTFlk}]}}}]

  kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:Secret (development/rabbitmq-config)
    error: resource development/rabbitmq-config was not successfully created by the Kubernetes API server : secrets "rabbitmq-config" already exists

  kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:Secret (development/redis)
    error: resource development/redis was not successfully created by the Kubernetes API server : secrets "redis" already exists

  kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:ServiceAccount (development/redis)
    error: resource development/redis was not successfully created by the Kubernetes API server : serviceaccounts "redis" already exists

  kubernetes:helm.sh/v3:Chart$kubernetes:rbac.authorization.k8s.io/v1:RoleBinding (development/rabbitmq-endpoint-reader)
    error: resource development/rabbitmq-endpoint-reader was not successfully created by the Kubernetes API server : rolebindings.rbac.authorization.k8s.io "rabbitmq-endpoint-reader" already exists
No idea where they should still exist though

quaint-match-50796

11/24/2022, 2:53 PM
pulumi stack --show-urns
kubectl get sa redis -n "namespace"

important-holiday-25047

11/24/2022, 2:55 PM
kubectl still has the redis: redis 1 282d

quaint-match-50796

11/24/2022, 2:57 PM
I can't understand why the Helm release doesn't show.
kubectl config get-contexts

important-holiday-25047

11/24/2022, 2:57 PM
Everything related to redis or rabbitmq from the pulumi stack:
pulumi:pulumi:Stack                                   Cloud-Development
    │  URN: urn:pulumi:Development::Cloud::pulumi:pulumi:Stack::Cloud-Development
    ├─ kubernetes:helm.sh/v3:Chart                        rabbitmq
    │     URN: urn:pulumi:Development::Cloud::kubernetes:helm.sh/v3:Chart::rabbitmq
    ├─ kubernetes:helm.sh/v3:Chart                        redis
    │  │  URN: urn:pulumi:Development::Cloud::kubernetes:helm.sh/v3:Chart::redis
    │  ├─ kubernetes:core/v1:Service                      development/redis-headless
    │  │     URN: urn:pulumi:Development::Cloud::kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:Service::development/redis-headless
    │  ├─ kubernetes:core/v1:Service                      development/redis-master
    │  │     URN: urn:pulumi:Development::Cloud::kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:Service::development/redis-master
    │  └─ kubernetes:apps/v1:StatefulSet                  development/redis-master
    │        URN: urn:pulumi:Development::Cloud::kubernetes:helm.sh/v3:Chart$kubernetes:apps/v1:StatefulSet::development/redis-master
    │     URN: urn:pulumi:Development::Cloud::kubernetes:core/v1:Namespace::development
    ├─ random:index/randomPassword:RandomPassword         redis-password
    │     URN: urn:pulumi:Development::Cloud::random:index/randomPassword:RandomPassword::redis-password
    ├─ random:index/randomPassword:RandomPassword         rabbitmq-password
    │     URN: urn:pulumi:Development::Cloud::random:index/randomPassword:RandomPassword::rabbitmq-password
    ├─ kubernetes:core/v1:Secret                          scaling-rabbitmq-secrets
    │     URN: urn:pulumi:Development::Cloud::kubernetes:core/v1:Secret::scaling-rabbitmq-secrets
    ├─ kubernetes:keda.sh/v1alpha1:TriggerAuthentication  scaling-rabbitmq-auth
    │     URN: urn:pulumi:Development::Cloud::kubernetes:keda.sh/v1alpha1:TriggerAuthentication::scaling-rabbitmq-auth
The context is correct

quaint-match-50796

11/24/2022, 3:01 PM
Export your state (just to be sure you have a backup)
pulumi state delete 'urn:pulumi:Development::Cloud::kubernetes:helm.sh/v3:Chart::rabbitmq' --target-dependents
After this, remove the components that are still on the cluster (related to the failed release install), then run pulumi up again.
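Putting the whole recovery together, roughly (the URN comes from the stack output you posted; the leftover object names come from your earlier error list — adjust as needed for Redis as well):

```shell
# 1. Back up the current state
pulumi stack export --file state-backup.json

# 2. Drop the broken chart (and all of its children) from state
pulumi state delete 'urn:pulumi:Development::Cloud::kubernetes:helm.sh/v3:Chart::rabbitmq' --target-dependents

# 3. Remove the leftover objects from the cluster so pulumi up can recreate them
kubectl delete configmap redis-configuration redis-health redis-scripts -n development
kubectl delete secret rabbitmq-config redis -n development
kubectl delete serviceaccount rabbitmq redis -n development
kubectl delete rolebinding rabbitmq-endpoint-reader -n development

# 4. Recreate everything from the program
pulumi up
```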

important-holiday-25047

11/24/2022, 5:58 PM
It's working again now, thanks a lot.

quaint-match-50796

11/24/2022, 6:01 PM
That's great.
When you face any Pulumi issue regarding plans that seems weird, don't refresh before looking everything over.