# kubernetes
s
What is a good way to prevent this error when using v3.Release? It happens on every second execution of `pulumi up`:
Copy code
kubernetes:helm.sh/v3:Release (mongodb):
    error: 1 error occurred:
    	* Helm release "mongodb/mongodb-456643ba" failed to initialize completely. Use Helm CLI to investigate.: failed to become available within allocated timeout. Error: Helm Release mongodb/mongodb-456643ba: release mongodb-456643ba failed, and has been rolled back due to atomic being set: failed to create resource: the server could not find the requested resource
@stocky-restaurant-98004 I still don't understand how I am supposed to reliably install Helm charts with Pulumi. After a couple of weeks of working on these errors, I am close to giving up.
Copy code
kubernetes:core/v1:ServiceAccount (ingress/ingress-nginx-admission):
    warning: This resource contains Helm hooks that are not currently supported by Pulumi. The resource will be created, but any hooks will not be executed. Hooks support is tracked at https://github.com/pulumi/pulumi-kubernetes/issues/555 -- This warning can be disabled by setting the PULUMI_K8S_SUPPRESS_HELM_HOOK_WARNINGS environment variable

  kubernetes:rbac.authorization.k8s.io/v1:ClusterRole (elastic-operator-edit):
    error: resource elastic-operator-edit was not successfully created by the Kubernetes API server : Server-Side Apply field conflict detected. see https://www.pulumi.com/registry/packages/kubernetes/how-to-guides/managing-resources-with-server-side-apply/#handle-field-conflicts-on-existing-resources for troubleshooting help
    : Apply failed with 1 conflict: conflict with "pulumi-resource-kubernetes" using rbac.authorization.k8s.io/v1: .metadata.labels.app.kubernetes.io/instance

  kubernetes:batch/v1:Job (ingress/ingress-nginx-admission-create):
    warning: This resource contains Helm hooks that are not currently supported by Pulumi. The resource will be created, but any hooks will not be executed. Hooks support is tracked at https://github.com/pulumi/pulumi-kubernetes/issues/555 -- This warning can be disabled by setting the PULUMI_K8S_SUPPRESS_HELM_HOOK_WARNINGS environment variable

  kubernetes:rbac.authorization.k8s.io/v1:ClusterRole (elastic-operator):
    error: resource elastic-operator was not successfully created by the Kubernetes API server : Server-Side Apply field conflict detected. see https://www.pulumi.com/registry/packages/kubernetes/how-to-guides/managing-resources-with-server-side-apply/#handle-field-conflicts-on-existing-resources for troubleshooting help
    : Apply failed with 1 conflict: conflict with "pulumi-resource-kubernetes" using rbac.authorization.k8s.io/v1: .metadata.labels.app.kubernetes.io/instance

  kubernetes:networking.k8s.io/v1:NetworkPolicy (ingress/ingress-nginx-admission):
    warning: This resource contains Helm hooks that are not currently supported by Pulumi. The resource will be created, but any hooks will not be executed. Hooks support is tracked at https://github.com/pulumi/pulumi-kubernetes/issues/555 -- This warning can be disabled by setting the PULUMI_K8S_SUPPRESS_HELM_HOOK_WARNINGS environment variable

  kubernetes:rbac.authorization.k8s.io/v1:ClusterRoleBinding (ingress-nginx-admission):
    warning: This resource contains Helm hooks that are not currently supported by Pulumi. The resource will be created, but any hooks will not be executed. Hooks support is tracked at https://github.com/pulumi/pulumi-kubernetes/issues/555 -- This warning can be disabled by setting the PULUMI_K8S_SUPPRESS_HELM_HOOK_WARNINGS environment variable

  kubernetes:rbac.authorization.k8s.io/v1:ClusterRole (elastic-operator-view):
    error: resource elastic-operator-view was not successfully created by the Kubernetes API server : Server-Side Apply field conflict detected. see https://www.pulumi.com/registry/packages/kubernetes/how-to-guides/managing-resources-with-server-side-apply/#handle-field-conflicts-on-existing-resources for troubleshooting help
    : Apply failed with 1 conflict: conflict with "pulumi-resource-kubernetes" using rbac.authorization.k8s.io/v1: .metadata.labels.app.kubernetes.io/instance
s
This is beyond my ability to debug, but I pinged the engineering team. In the meantime, do you have a minimal example that reproduces your bug? If so, it would be good to file a GH issue if you think there's either a problem with the docs or a bug in the code.
s
Ty sir! Sure:
Copy code
const mongodbNamespace = new k8s.core.v1.Namespace(
  'mongodb',
  { metadata: { name: 'mongodb' } },
  { provider: k8sProvider },
)

export const mongodbRelease = new helm.v3.Release(
  'mongodb',
  {
    namespace: mongodbNamespace.metadata.name,
    chart: 'mongodb',
    repositoryOpts: { repo: 'https://charts.bitnami.com/bitnami' },
    version: '14.2.6',
    atomic: true,
    cleanupOnFail: true,
    valueYamlFiles: [new FileAsset('charts/mongodb/values.yaml')],
  },
  { provider: k8sProvider, dependsOn: [mongodbNamespace] },
)
q
@stale-answer-34162 It looks like there is an issue with an object (defined within the Helm chart) being created and reaching a healthy state on the live cluster within our timeouts. The Kubernetes provider defers to the Helm binary to actually create the resources on the cluster, which is why the error message suggests using the Helm CLI to debug further. You could disable the live check if, for some reason, it is taking longer than usual for your Kubernetes cluster to spin up these resources: set `skipAwait: true` (spelled `skip_await` in Python) on the helm.v3.Release object, or increase the `timeout` setting on that object.
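For reference, a minimal sketch of both options in TypeScript, reusing the `k8sProvider` from the snippet above (`skipAwait` and `timeout` are the TypeScript spellings of these Release settings; pick one or the other):
Copy code
export const mongodbRelease = new helm.v3.Release(
  'mongodb',
  {
    namespace: 'mongodb',
    chart: 'mongodb',
    repositoryOpts: { repo: 'https://charts.bitnami.com/bitnami' },
    version: '14.2.6',
    // Option 1: skip the live readiness check entirely.
    skipAwait: true,
    // Option 2: keep the check but allow more time (value is in seconds).
    // timeout: 900,
    // `atomic`/`cleanupOnFail` are omitted here: Helm's atomic mode implies
    // waiting for resources, which is exactly what skipAwait turns off.
  },
  { provider: k8sProvider },
)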
Fwiw I was able to run your code snippet multiple times on a GKE cluster without issue (without a custom values file, that is).
s
I'm going to try this in a few minutes
d
Is it possible that @stale-answer-34162 is hitting this issue? There do seem to be numerous issues at play. The "hook" and "conflict" warnings above seem to be related to attempts to use `Chart` rather than `Release`. Some of the objects are cluster-scoped (cluster roles, admission webhooks, etc.), and it seems that some objects weren't cleaned up.
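For the "conflict" errors specifically, the linked Server-Side Apply guide suggests forcing the conflicting patch via the `pulumi.com/patchForce` annotation. A hedged sketch of one way to apply that with a `Chart` transformation (the `eck-operator` chart name and repo are my assumption about where those `elastic-operator` ClusterRoles come from; adjust to your actual chart):
Copy code
const elasticOperator = new k8s.helm.v3.Chart(
  'elastic-operator',
  {
    chart: 'eck-operator', // assumed chart name, not confirmed in this thread
    fetchOpts: { repo: 'https://helm.elastic.co' },
    transformations: [
      (obj: any) => {
        // Tell Pulumi's Server-Side Apply to take ownership of conflicting
        // fields on the cluster-scoped RBAC objects.
        if (obj.kind === 'ClusterRole') {
          obj.metadata.annotations = {
            ...obj.metadata.annotations,
            'pulumi.com/patchForce': 'true',
          }
        }
      },
    ],
  },
  { provider: k8sProvider },
)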
s
I have been having a really difficult time trying to determine when to use Chart or Release. I have tried both on multiple occasions: either the install never succeeds, or it succeeds once and subsequent `pulumi up` runs fail due to "failure to create existing objects", which stops the whole process. I have been trying this for weeks now and I still do not know how to install charts with hooks successfully; it seems like it just does not work for many of the most common Helm charts out there.
d
I would advocate for using `Release` because it does offer higher compatibility, e.g. with hooks.
@stale-answer-34162 a potential fix (details) for the intermittent failure will be included in pulumi-kubernetes v4.6.0, which should be released within a week or less.
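As a sketch of that (the chart name and repo here are the upstream ingress-nginx defaults, not taken from this thread), installing a hook-heavy chart through `Release` lets Helm itself run the hooks:
Copy code
const ingressNginx = new helm.v3.Release(
  'ingress-nginx',
  {
    chart: 'ingress-nginx',
    repositoryOpts: { repo: 'https://kubernetes.github.io/ingress-nginx' },
    namespace: 'ingress',
    // Helm creates the namespace itself, like `helm install --create-namespace`.
    createNamespace: true,
  },
  { provider: k8sProvider },
)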
s
Something that has helped my situation is using the `ConfigFile` class for YAML installs. I didn't know this existed earlier; pulumi-ai finally led me in this direction. For example:
Copy code
// Install Datadog CRDs
const datadogCrds = new k8s.helm.v3.Chart(
  'datadog-crds',
  {
    chart: 'datadog-crds',
    fetchOpts: { repo: 'https://helm.datadoghq.com' },
    version: '1.2.0',
  },
  { provider: k8sProvider },
)

export const datadogRelease = new helm.v3.Release(
  'datadog',
  {
    namespace: datadogNamespace.metadata.name,
    chart: 'datadog-operator',
    repositoryOpts: { repo: 'https://helm.datadoghq.com' },
    version: '1.3.0',
  },
  { provider: k8sProvider, dependsOn: [datadogNamespace] },
)

// Install Datadog Agent
export const datadogAgent = new helm.v3.Release(
  'datadog-agent',
  {
    namespace: 'datadog',
    chart: 'datadog',
    repositoryOpts: { repo: 'https://helm.datadoghq.com' },
    version: '3.49.3',
  },
  { provider: k8sProvider },
)

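// Apply the DatadogAgent custom resource from a local YAML manifest;
// dependsOn ensures the agent Release is installed first.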
const datadogAgentResources = new k8s.yaml.ConfigFile(
  'datadog-agent-yaml',
  {
    file: 'charts/datadog-operator/datadogagent.yml',
  },
  { dependsOn: datadogAgent },
)