# kubernetes
s
Are you using hard-coded names, or allowing Pulumi to auto-generate names for resources?
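For context on that question: when `metadata.name` is omitted, the Pulumi Kubernetes provider auto-names the object (the Pulumi resource name plus a random suffix), so re-creating it cannot collide with something already in the cluster, whereas a hard-coded `metadata.name` must not already exist. A minimal sketch of the two styles (provider options omitted):
```typescript
import * as k8s from '@pulumi/kubernetes'

// Auto-named: the Namespace object gets a name like "websocket-x7k2p9",
// so repeated creates never conflict with an existing "websocket" Namespace.
const autoNamed = new k8s.core.v1.Namespace('websocket')

// Hard-coded: the Namespace is literally called "websocket"; creating it
// fails if a "websocket" Namespace already exists outside this stack.
const hardCoded = new k8s.core.v1.Namespace('websocket-fixed', {
  metadata: { name: 'websocket' },
})
```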
s
one sec for an example I am making for another ticket
```typescript
import * as k8s from '@pulumi/kubernetes'
import * as pulumi from '@pulumi/pulumi'

const _websocketNamespace = new k8s.core.v1.Namespace(
  'websocket',
  { metadata: { name: 'websocket' } },
  { provider: k8sProvider },
)

export const websocketRelease = new k8s.helm.v3.Release(
  'websocket',
  {
    chart: './charts/websocket',
    name: 'websocket',
    namespace: _websocketNamespace.metadata.name,
  },
  { provider: k8sProvider, dependsOn: _websocketNamespace },
)
```
maybe I shouldn't use k8s? It's like I'm running up against a brick wall. I had complete success with my prior version of Pulumi on this, but some charts had different errors.
s
The ticket says “chart installs that are already installed in the cluster fail to install with a conflicting name.” Is that accurate? How did the charts get installed if not by Pulumi? Maybe I need more coffee. 🙂
s
it is accurate 😹 I have another crazy error now, one sec.
this stack should create about 100 resources and is now not doing anything.
```
❯ pu
Previewing update (eks-dev-charts-app)

View in Browser (Ctrl+O): https://app.pulumi.com/openphone/infra/eks-dev-charts-app/previews/2e0e6079-32c4-434f-a2c1-03019bfa8e6d

     Type                 Name                      Plan
     pulumi:pulumi:Stack  infra-eks-dev-charts-app

Resources:
    1 unchanged
```
I resolved the missing preview issue; trying `pulumi up` again.
s
👍🏻
s
okay, down to just one helm chart having an issue. Sorry I am panicky about this; I am on a deadline and I have been blocked by bugs in this Pulumi class for 3 months. sad panda
```
kubernetes:helm.sh/v3:Release (command):
    warning: Helm release "command" was created but has a failed status. Use the `helm` command to investigate the error, correct it, then retry. Reason: 1 error occurred:
    	* deployments.apps "command-worker" already exists
    error: 1 error occurred:
    	* Helm release "command/command" was created, but failed to initialize completely. Use Helm CLI to investigate.: failed to become available within allocated timeout. Error: Helm Release command/command: 1 error occurred:
    	* deployments.apps "command-worker" already exists
```
this Deployment should not have existed; everything was deleted
s
OK, try a `pulumi refresh` first, to reconcile the stack state with what is actually present/not present, then try `pulumi up` and see if you get the same error. I’m assuming you’ve verified (using `kubectl` or whatever tool you prefer) that the Deployment is indeed gone?
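For anyone following along, the suggested sequence looks roughly like this (the namespace and Deployment names are taken from the error output above; adjust as needed):
```
# confirm the Deployment really is gone from the cluster
kubectl -n command get deployment command-worker

# reconcile Pulumi's state with what actually exists, then retry
pulumi refresh
pulumi up
```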
s
if anyone has a better pattern for using `k8s.helm.v3.Release` I am all ears; I am desperate at this point. lol.
```
kubernetes:helm.sh/v3:Release (admin):
    error: cannot re-use a name that is still in use
```
s
How did you remove the previous installation? Using the `helm` CLI, using `pulumi destroy`, or by deleting the K8s resources directly? Does the `helm` CLI still show the release as being present?
s
I had not removed it; I ran `pulumi up` again. For most charts it works correctly, but these edge cases consistently won't work. I am expecting it to work without a destroy or manual intervention, for CI/CD.
`helm ls` does not show the release as present, but it is installed in the cluster.
a new issue as well: my code that creates a secret in the namespace before the release is installed is not working consistently anymore.
for example
```typescript
function createExtCredSecret(
  namespace: pulumi.Output<string>,
  provider: k8s.Provider,
  dependsOn: pulumi.Input<pulumi.Resource>,
): pulumi.Output<k8s.yaml.ConfigFile> {
  return namespace.apply(
    (ns) =>
      new k8s.yaml.ConfigFile(
        `extcred-${ns}`,
        {
          file: 'extcred.yaml',
          transformations: [
            (obj: any) => {
              if (obj.kind === 'Secret') {
                obj.metadata.namespace = ns
              }
            },
          ],
        },
      ),
  )
}

const _accountNamespace = new k8s.core.v1.Namespace(
  'account',
  { metadata: { name: 'account' } },
  { provider: k8sProvider },
)

const _extcredSecretAccount = createExtCredSecret(_accountNamespace.metadata.name, k8sProvider, _accountNamespace)
```
namespace is present but secret is not
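An observation on the helper above, offered as a possible cause rather than a confirmed diagnosis: the `provider` and `dependsOn` parameters are accepted but never passed to the `ConfigFile`, and the resource is created inside `.apply(...)`, which Pulumi's docs discourage because resources created there don't show up reliably in previews. A sketch of an alternative that forwards the options and avoids the `.apply()` wrapper, reusing the identifiers from the snippet above (the namespace names are hard-coded strings in this stack, so a plain string parameter works):
```typescript
import * as k8s from '@pulumi/kubernetes'
import * as pulumi from '@pulumi/pulumi'

function createExtCredSecret(
  ns: string, // namespace name, hard-coded above ('account', 'notification', ...)
  provider: k8s.Provider,
  dependsOn: pulumi.Input<pulumi.Resource>,
): k8s.yaml.ConfigFile {
  return new k8s.yaml.ConfigFile(
    `extcred-${ns}`,
    {
      file: 'extcred.yaml',
      transformations: [
        (obj: any) => {
          if (obj.kind === 'Secret') {
            obj.metadata.namespace = ns
          }
        },
      ],
    },
    { provider, dependsOn }, // forward the options the original helper drops
  )
}

// usage mirroring the original call site; depending on the Namespace resource
// still enforces ordering even though the name is passed as a plain string
const _extcredSecretAccount = createExtCredSecret('account', k8sProvider, _accountNamespace)
```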
s
These are some very odd behaviors! I’m sorry that you’re running into this. Let me escalate internally with the Kubernetes provider team and see if we can help figure out what’s happening here.
🙏 1
s
multiple runs are deploying the release multiple times now. I'm losing my mind over this.
I am having to give up on using Pulumi for helm chart installs just to move forward, but since it is funny I thought I would share the latest behavior: I commented out all the releases in my stack, and Pulumi is trying to install them again. 🤷‍♂️
s
So strange! Sorry we haven't been able to help get this resolved. (I did ping the internal Kubernetes provider team in the hopes that one of them would be able to help.)
s
I've been blocked by this for 3 months and it has gotten me close to tears on multiple occasions. I'm just thrilled I am not fired because of this. If I were my manager I would have fired me.
s
sad panda I'm so sorry to hear this! The behavior you're describing definitely seems very unusual and out of place. I'll bring this up again with the provider team, but I understand you need to move on so you can hit your deadlines.
s
fwiw it's just `helm.Release` I'm giving up on; I can't keep doing the same thing and expect a different result. I am going to be using helm and bash, which is lame.
q
Hey @stale-answer-34162 Apologies that you're having difficulties managing Helm releases with Pulumi. On the surface, it looks like something might be up with the helm state within your k8s cluster. Perhaps not everything was cleaned up the last time you uninstalled the helm chart. Could you check to see if there are any helm secrets in the `kube-system` namespace? Example kubectl command:
```
kubectl -n kube-system get secrets | grep helm
```
This may be why `helm ls` is not showing the release, despite the error surfaced from the Helm CLI to Pulumi indicating otherwise.
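A hedged addition: with Helm 3, release records are stored as Secrets of type `helm.sh/release.v1` in the release's own namespace (kube-system is where Helm 2's Tiller kept its records), so it is worth checking the target namespace too. For example, for the failing "command" release above (namespace assumed from the error output):
```
# Helm 3 release records live in the release's namespace, labeled owner=helm
kubectl -n command get secrets -l owner=helm
```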
s
I will try that on my next deployment in a couple/few hours. I have noticed a lot of unusual cleanup issues too.
👍 1
q
Just to clarify some points for my understanding, I assume that you're only using `helm.v3.Release` resources, and not `helm.v3.Chart`? Are you also manipulating the helm installations externally/manually with the helm CLI? And can you clarify how you're uninstalling helm releases from the cluster as well? Thanks!
s
correct, only `helm.v3.Release`. I had a really poor success rate with `helm.v3.Chart` due to many charts requiring hooks, and much better success with Release. I was not typically manipulating helm installations with helm unless testing broken installs. My intention was to use `helm.v3.Release` to overwrite existing releases when helm chart changes were detected, and I have no other uninstall logic in my Pulumi codebase. When an overwrite would not work on a complex chart, I would comment out the release code block, run `pulumi up`, then uncomment it and run `pulumi up` again.
this procedure worked well in 4.5.0 but now basically does not work in 4.7.0.
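An aside on the comment-out/uncomment workaround described above: a less invasive way to drop a single stuck resource from Pulumi's view, without editing code or touching the cluster, may be `pulumi state delete`. A sketch (the URN shown is illustrative; copy the real one from the first command):
```
# list the URNs in the current stack, then remove just the stuck Release from state
pulumi stack --show-urns
pulumi state delete 'urn:pulumi:eks-dev-charts-app::infra::kubernetes:helm.sh/v3:Release::command'
```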
a typical Release block looks like this for me
```typescript
export const _elasticSystemNamespace = new k8s.core.v1.Namespace(
  'elastic-system',
  { metadata: { name: 'elastic-system' } },
  { provider: k8sProvider },
)

const _elasticValues = new pulumi.asset.FileAsset('charts/eck-operator/values.yaml')

const _elasticRelease = new helm.v3.Release(
  'elastic-operator',
  {
    name: 'elastic-operator',
    namespace: _elasticSystemNamespace.metadata.name,
    chart: 'eck-operator',
    repositoryOpts: { repo: 'https://helm.elastic.co' },
    valueYamlFiles: [_elasticValues],
    version: '2.10.0',
  },
  { provider: k8sProvider, dependsOn: [_elasticSystemNamespace] },
)
```
and a local helm chart via:
```typescript
const _notificationNamespace = new k8s.core.v1.Namespace(
  'notification',
  { metadata: { name: 'notification' } },
  { provider: k8sProvider },
)

const _dockerSecretNotification = createDockerRegistrySecret(
  "ghcr-k8s-notification", // Pulumi resource name
  "ghcr-k8s", // Kubernetes secret name
  _notificationNamespace.metadata.name,
  { provider: k8sProvider }
);

const _extcredSecretNotification = createExtCredSecret(_notificationNamespace.metadata.name, k8sProvider, _notificationNamespace)

export const notificationRelease = new k8s.helm.v3.Release(
  'notification',
  {
    chart: './charts/notification',
    name: 'notification',
    namespace: _notificationNamespace.metadata.name,
    values: {
      image: {
        pullSecrets: [{ name: _dockerSecretNotification }],
      },
    },
  },
  { provider: k8sProvider, dependsOn: [_notificationNamespace, _dockerSecretNotification, _extcredSecretNotification] },
)
```
this last one, instead of overwriting the release, will now make a new release on every `pulumi up`.
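One more tentative observation on the snippet above, since `createDockerRegistrySecret` isn't shown: if that helper returns a `k8s.core.v1.Secret` resource, then `pullSecrets: [{ name: _dockerSecretNotification }]` passes the whole resource object where the chart expects a string. In that case the values entry would presumably need the secret's name instead, something like:
```typescript
// hypothetical, assuming createDockerRegistrySecret returns a k8s.core.v1.Secret;
// metadata.name is an Output<string>, which Release values accept
const notificationValues = {
  image: {
    pullSecrets: [{ name: _dockerSecretNotification.metadata.name }],
  },
}
```
and then `notificationValues` would be passed as the Release's `values`.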
q
Thanks for the additional info! I'll investigate to see what's happening here.
w
https://github.com/martinjt/aks-otel-demo/blob/main/infra/Applications/OtelDemo.cs If you want a complete example, that's how I do it, and it works fine with upgrades etc. I have found that sometimes, if the helm chart fails to deploy for a reason outside of Pulumi, there's an issue cleaning up: Pulumi doesn't realise that the chart is in a pending state and can't clean it up. I'd suggest getting helm installed locally, and when you hit the problem in Pulumi, try doing a `helm list` to see if the one that Pulumi deployed actually errored. Helm is an interesting beast and can error in weird and wonderful ways that are not related to Pulumi.
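Concretely, the checks being suggested look something like this; `-a`/`--pending` surface releases stuck in non-deployed states that a plain `helm ls` can hide (release and namespace names taken from the errors above):
```
# show all releases across namespaces, including pending/failed ones
helm list -A -a
helm list -A --pending

# inspect the history and status of the suspect release
helm history command -n command
helm status command -n command
```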
s
I figured out this issue was indeed related to helm install versus helm upgrade. Thanks for everyone's look at this. Maybe there are better ways to communicate backward-incompatible (breaking) changes in releases.
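For anyone landing here later: if you do fall back to helm and bash, a common idempotent pattern is `helm upgrade --install`, which upgrades the release when it already exists and installs it when it doesn't, e.g. for the local chart above:
```
# installs the release if absent, upgrades it in place if present
helm upgrade --install notification ./charts/notification -n notification --create-namespace
```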