# kubernetes
s
These are examples of helm chart install failures I run into every day. Using v3.Chart does not seem to make a difference in terms of reliability:
+   ├─ kubernetes:helm.sh/v3:Release  elastic-operator           **creating failed**     error: rendered manifests contain a resource that already exists. Unable
 ~   ├─ kubernetes:helm.sh/v3:Release  redis                      **updating failed**     [diff: ~checksum,values]; error: another operation (install/upgrade/rollback)
 +   └─ kubernetes:helm.sh/v3:Release  datadog                    **creating failed**     error: release datadog-b3700a2f failed, and has been uninstalled due to a

  kubernetes:helm.sh/v3:Release (elastic-operator):
    error: rendered manifests contain a resource that already exists. Unable to continue with install: CustomResourceDefinition "beats.beat.k8s.elastic.co" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "elastic-operator-1ae1300c": current value is "elastic-operator-crd-6e702a32"

  pulumi:pulumi:Stack (dev-eks-dev-charts-system):
    error: update failed

  kubernetes:helm.sh/v3:Release (kafka):
    error: another operation (install/upgrade/rollback) is in progress

  kubernetes:helm.sh/v3:Release (elastic-operator-crd):
    error: 1 error occurred:
    	* Helm release "elastic-system/elastic-operator-crd-6e702a32" failed to initialize completely. Use Helm CLI to investigate.: failed to become available within allocated timeout. Error: Helm Release elastic-system/elastic-operator-crd-6e702a32: an error occurred while rolling back the release. original upgrade error: failed to create resource: the server could not find the requested resource: failed to create resource: the server could not find the requested resource
After using helm to uninstall these and running pulumi refresh, I still get all these broken artifacts preventing installation of the charts with Release at all. @stocky-restaurant-98004
Diagnostics:
  kubernetes:helm.sh/v3:Release (kafka):
    error: another operation (install/upgrade/rollback) is in progress

  kubernetes:helm.sh/v3:Release (kube-prometheus-stack):
    error: failed to install CRD crds/crd-probes.yaml: 1 error occurred:
    	* the server could not find the requested resource

  kubernetes:helm.sh/v3:Release (elastic-operator):
    error: rendered manifests contain a resource that already exists. Unable to continue with install: ServiceAccount "elastic-operator" in namespace "elastic-system" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "elastic-operator-23c1a9d9": current value is "elastic-operator-c64272a6"

  kubernetes:helm.sh/v3:Release (elastic-operator-crd):
    error: rendered manifests contain a resource that already exists. Unable to continue with install: CustomResourceDefinition "agents.agent.k8s.elastic.co" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "elastic-operator-crd-7ef6c675": current value is "elastic-operator-c64272a6"

  pulumi:pulumi:Stack (dev-eks-dev-charts-system):
    error: update failed

  kubernetes:helm.sh/v3:Release (csi-secrets-store):
    error: release csi-secrets-store-78331352 failed, and has been uninstalled due to atomic being set: failed pre-install: warning: Hook pre-install secrets-store-csi-driver/templates/crds-upgrade-hook.yaml failed: 1 error occurred:
    	* the server could not find the requested resource
Installs with this method just seem completely broken.
+   ├─ kubernetes:helm.sh/v3:Release  datadog                    **creating failed**     error: rendered manifests contain a resource that already exists. Unable
 ~   ├─ kubernetes:helm.sh/v3:Release  redis                      **updating failed**     [diff: ~checksum,values]; error: another operation (install/upgrade/rollb
 +   ├─ kubernetes:helm.sh/v3:Release  elastic-operator-crd       **creating failed**     error: rendered manifests contain a resource that already exists. Unable
 +   └─ kubernetes:helm.sh/v3:Release  elastic-operator           **creating failed**     error: rendered manifests contain a resource that already exists. Unable
kubernetes:helm.sh/v3:Release (redis):
    error: another operation (install/upgrade/rollback) is in progress

  kubernetes:helm.sh/v3:Release (kube-prometheus-stack):
    error: release kube-prometheus-stack-67fd455a failed, and has been uninstalled due to atomic being set: 10 errors occurred:
    	* the server could not find the requested resource
    	* the server could not find the requested resource
    	* the server could not find the requested resource
    	* the server could not find the requested resource
    	* the server could not find the requested resource
    	* the server could not find the requested resource
    	* the server could not find the requested resource
    	* the server could not find the requested resource
    	* the server could not find the requested resource
    	* the server could not find the requested resource

  kubernetes:helm.sh/v3:Release (elastic-operator):
    error: rendered manifests contain a resource that already exists. Unable to continue with install: ServiceAccount "elastic-operator" in namespace "elastic-system" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "elastic-operator-1b84f841": current value is "elastic-operator-c64272a6"

  kubernetes:helm.sh/v3:Release (elastic-operator-crd):
    error: rendered manifests contain a resource that already exists. Unable to continue with install: CustomResourceDefinition "agents.agent.k8s.elastic.co" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "elastic-operator-crd-db4bdcc8": current value is "elastic-operator-c64272a6"

  pulumi:pulumi:Stack (dev-eks-dev-charts-system):
    error: update failed

  kubernetes:helm.sh/v3:Release (datadog):
    error: rendered manifests contain a resource that already exists. Unable to continue with install: CustomResourceDefinition "datadogagents.datadoghq.com" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "datadog-db7aca14": current value is "datadog-b3700a2f"
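Most of the "invalid ownership metadata" failures above follow the same pattern: Pulumi auto-names each Helm release with a random suffix (elastic-operator-1ae1300c, datadog-b3700a2f, ...), while objects left behind on the cluster still carry a meta.helm.sh/release-name annotation for an earlier suffix, so Helm refuses to adopt them. A minimal sketch of pinning the release name with the Release resource's `name` input, assuming the eck-operator chart from the Elastic repo; this only illustrates the naming mechanism and is not a confirmed fix from this thread:

```typescript
import { helm } from '@pulumi/kubernetes'

// Illustrative only: with an explicit `name`, the Helm release keeps the same name
// across retries instead of getting a fresh auto-generated suffix, so objects
// annotated with meta.helm.sh/release-name from a previous attempt still match.
const elasticOperator = new helm.v3.Release('elastic-operator', {
  name: 'elastic-operator', // deterministic release name (assumption: nothing else on the cluster uses it)
  namespace: 'elastic-system',
  chart: 'eck-operator',
  repositoryOpts: { repo: 'https://helm.elastic.co' },
  version: '2.10.0',
})
```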
s
Do any of them install 1 by 1?
s
I can try uninstalling everything and trying one by one. My latest Release try looks like this:
import { helm } from '@pulumi/kubernetes'
import { FileAsset } from '@pulumi/pulumi/asset'

// k8sProvider is the explicitly configured Kubernetes provider for the target cluster (defined elsewhere).
export const elasticRelease = new helm.v3.Release(
  'elastic-operator',
  {
    namespace: 'elastic-system',
    chart: 'eck-operator',
    repositoryOpts: { repo: 'https://helm.elastic.co' },
    version: '2.10.0',
    atomic: true,
    cleanupOnFail: true,
    disableOpenapiValidation: false,
    valueYamlFiles: [new FileAsset('charts/eck-operator/values.yaml')],
  },
  { provider: k8sProvider },
)
s
One way to debug is to configure the k8s provider to write out YAML files instead; that might help track down the issue: https://www.pulumi.com/registry/packages/kubernetes/api-docs/provider/#renderyamltodirectory_go
So what happens if you `kubectl apply` the output of just 1 chart?
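For reference, renderYamlToDirectory is a provider-level option. A minimal sketch of a second provider instance that only renders manifests to disk (the resource name and output path here are made up), which you could then inspect or `kubectl apply` chart by chart:

```typescript
import * as k8s from '@pulumi/kubernetes'

// Writes rendered manifests to ./rendered instead of applying them to the cluster.
const renderProvider = new k8s.Provider('yaml-render', {
  renderYamlToDirectory: 'rendered', // hypothetical output path
})

// Pass this provider to a resource that expands to plain Kubernetes objects
// (e.g. a helm.v3.Chart) via { provider: renderProvider } to dump just that
// chart's manifests for inspection.
```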
s
Thanks, yeah. I have become convinced to use v3.Release and to install all the charts locally to try this.
s
k9scli.io will make it easier to figure out what's wrong.
Try Chart vs Release (whichever one you have not tried)
s
I've used k9s since EKS 1.18.
Okay, I did get 1 chart installed successfully; trying the next one.
omg the 2nd installed. 🤯
Strangely, deleting the CRDs does not help. A solution I am going to try after continuing with the next ones is installing them in their own stack.
cert-manager does not install under any circumstance. I'm going to try this with Chart next.
kubernetes:helm.sh/v3:Release (cert-manager):
    warning: Helm release "cert-manager-699df6fb" was created but has a failed status. Use the `helm` command to investigate the error, correct it, then retry. Reason: failed post-install: 1 error occurred:
    	* timed out waiting for the condition
    error: 1 error occurred:
    	* Helm release "cert-manager/cert-manager-699df6fb" was created, but failed to initialize completely. Use Helm CLI to investigate.: failed to become available within allocated timeout. Error: Helm Release cert-manager/cert-manager-699df6fb: failed post-install: 1 error occurred:
    	* timed out waiting for the condition
and using v3.Chart just works. 😕 🤷
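For comparison, a minimal sketch of what the v3.Chart variant might look like for cert-manager; the Jetstack repo URL and the installCRDs value are assumptions, not taken from this thread:

```typescript
import * as k8s from '@pulumi/kubernetes'

// Chart renders the templates client-side and manages each Kubernetes object
// directly, rather than driving a server-side Helm release like v3.Release does.
const certManager = new k8s.helm.v3.Chart('cert-manager', {
  chart: 'cert-manager',
  namespace: 'cert-manager',
  fetchOpts: { repo: 'https://charts.jetstack.io' }, // assumption: standard Jetstack chart repo
  values: { installCRDs: true },                     // assumption: let the chart manage its CRDs
})
```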
I've made progress on these one by one, and I think the last issue is related to CRDs, which I can solve by putting those into their own stack.
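As a rough sketch of the "own stack" idea: a separate Pulumi program whose only job is to own the CRD releases, so their lifecycle is decoupled from the operator and app charts. The stack name and the eck-operator-crds chart name below are assumptions:

```typescript
// index.ts of a hypothetical CRD-only stack (e.g. dev-eks-dev-charts-crds)
import { helm } from '@pulumi/kubernetes'

export const elasticOperatorCrds = new helm.v3.Release('elastic-operator-crd', {
  namespace: 'elastic-system',
  createNamespace: true,
  chart: 'eck-operator-crds', // assumption: CRD-only chart published in the Elastic repo
  repositoryOpts: { repo: 'https://helm.elastic.co' },
})
```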