thankful-fall-44419
02/09/2024, 3:50 PM

thankful-fall-44419
02/09/2024, 3:51 PM

thankful-fall-44419
02/09/2024, 3:51 PM
error: rpc error: code = Unknown desc = failed to parse kubeconfig: yaml: line 4: mapping values are not allowed in this context

thankful-fall-44419
02/09/2024, 3:52 PM

orange-arm-29738
02/11/2024, 9:59 PM
getIp
for an available worker, but I can't figure out how to integrate that logic with Kubernetes. I could try to use health checks to hide the busy pods, but I'd like a cleaner solution. Any ideas?

hundreds-gold-35182
02/12/2024, 5:09 PM
pulumi stack history
-- anyone have experience enabling this sort of thing?

stale-answer-34162
02/13/2024, 3:36 PM
kubernetes:core/v1:Namespace (ingress-nginx):
error: Timeout occurred polling for 'ingress'
sparse-intern-71089
02/21/2024, 7:26 PM

great-pencil-25669
02/22/2024, 2:11 PM

prehistoric-fish-76119
02/22/2024, 2:49 PM
kubernetes:helm:template returned an error: failed to generate YAML for specified Helm chart: failed to create chart from template: execution error at (traefik/templates/servicemonitor.yaml:5:10): ERROR: You have to deploy monitoring.coreos.com/v1 first
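That error is raised by the traefik chart's servicemonitor.yaml when the Prometheus Operator's `monitoring.coreos.com/v1` API is not visible, and `kubernetes:helm:template` (the Chart resource) renders charts offline, so the cluster's installed CRDs are never consulted. Either switch to a Helm Release (which installs against the live cluster) or leave the ServiceMonitor disabled in the chart values. A sketch of the latter (the exact key names vary across traefik chart versions; check that chart's values.yaml):

```yaml
metrics:
  prometheus:
    serviceMonitor: null   # keep servicemonitor.yaml from rendering
```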
numerous-energy-27817
02/23/2024, 10:08 AM

big-potato-91793
02/23/2024, 2:52 PM
[id=something]
[urn=urn:pulumi:nonprod9::apollo-client-deployment::something]
~ spec: {
~ source: "TyfwVfQp.mjs" => "vF9be9rS.mjs"
}
error: Preview failed: 1 error occurred:
* the Kubernetes API server reported that "something" failed to fully initialize or become live: Server-Side Apply field conflict detected. See https://www.pulumi.com/registry/packages/kubernetes/how-to-guides/managing-resources-with-server-side-apply/#handle-field-conflicts-on-existing-resources for troubleshooting help
: Apply failed with 1 conflict: conflict with "node-fetch" using component.tm1.tktm.io/v1beta1: .spec.source
error: preview failed
Someone modified the object with OpenLens, and now Pulumi cannot do anything about it.
Any idea what we are doing wrong?
That's kind of a normal step for us.

gorgeous-minister-41131
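For the Server-Side Apply conflict above, the provider's documented escape hatch (the link in the error message) is to force Pulumi to take the conflicted field back from the other field manager, here "node-fetch". A sketch of the annotation, set on the conflicting resource's metadata:

```yaml
metadata:
  annotations:
    # tell the Pulumi Kubernetes provider to overwrite conflicting
    # field managers on the next server-side apply
    pulumi.com/patchForce: "true"
```

Running `pulumi refresh` first, so the state reflects what OpenLens changed, is the gentler alternative when the out-of-band edit should be kept.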
02/26/2024, 7:51 PM
error: kubernetes:helm.sh/v3:Release resource 'rabbitmq': property chart value {rabbitmq} has a problem: looks like "oci://registry-1.docker.io/bitnamicharts/rabbitmq" is not a valid chart repository or cannot be reached: object required; check the chart name and repository configuration.
kind-fireman-33438
02/27/2024, 6:50 PM

future-kite-91191
02/29/2024, 7:40 AM

numerous-fall-58752
02/29/2024, 1:46 PM
pulumi up
with PULUMI_K8S_DELETE_UNREACHABLE="true"
but I'm getting
error: can't delete Helm Release with unreachable cluster. Reason: "unable to load schema information from the API server: the server could not find the requested resource"
I thought PULUMI_K8S_DELETE_UNREACHABLE was exactly for this kind of thing?

delightful-flag-25944
03/01/2024, 3:40 PM
const eksCluster = eks.Cluster.get("eksCluster", clusterName);
but I am getting "Property 'get' does not exist on type 'typeof Cluster'". This is my import: import * as eks from "@pulumi/eks";
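The compiler error above is expected: `eks.Cluster` in `@pulumi/eks` is a component resource and exposes no static `get`. One way around it, sketched below, is to read the underlying cluster through `@pulumi/aws` instead (this assumes the cluster exists outside the stack and `clusterName` holds the EKS cluster's name/ID):

```diff
-import * as eks from "@pulumi/eks";
-const eksCluster = eks.Cluster.get("eksCluster", clusterName);
+import * as aws from "@pulumi/aws";
+// aws.eks.Cluster is a regular custom resource, so it has a static
+// `get` for adopting an existing cluster into the program
+const eksCluster = aws.eks.Cluster.get("eksCluster", clusterName);
```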
narrow-lamp-48734
03/05/2024, 7:00 AM
pulumi up
? They occur sporadically, in roughly 1 out of every 3 attempts. Given that I'm managing several charts in my stack, it's unlikely that I can successfully run pulumi up
on my first try; I often need multiple attempts before all resources are updated successfully.
Diagnostics:
kubernetes:helm.sh/v3:Release (ingress-nginx-helm):
error: kubernetes:helm.sh/v3:Release resource 'ingress-nginx-helm': property chart value {ingress-nginx} has a problem: Get "https://github.com/kubernetes/ingress-nginx/releases/download/helm-chart-4.9.1/ingress-nginx-4.9.1.tgz": EOF; check the chart name and repository configuration.
Thank you in advance!

full-spoon-20578
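Those EOF errors point at the chart tarball download from GitHub flaking, not at Pulumi itself, which is why a retry eventually succeeds. One mitigation is vendoring the chart locally (for example `helm pull ingress-nginx/ingress-nginx --version 4.9.1 --untar --untardir charts`) so updates never re-download it. The lines below are hypothetical, sketching what that change could look like in a Release definition:

```diff
-        chart="ingress-nginx",
-        repository_opts=RepositoryOptsArgs(
-            repo="https://kubernetes.github.io/ingress-nginx",
-        ),
+        # reference the locally vendored chart directory instead of
+        # downloading the tarball on every update
+        chart="./charts/ingress-nginx",
```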
03/05/2024, 9:18 PM
pulumi
binaries directly :sweaty-blob: Before doing so, I am hoping to raise the question in the hope that someone has already stumbled on something similar.
I have a Kubernetes cluster described using Pulumi. The cluster is managed by DigitalOcean. I am using a couple of public Helm charts to deploy some custom resources; other than that, the setup is pretty boring and standard: several deployments and services.
Everything works OK for some time, but after a few days (roughly 7 days) I start seeing the following error when running pulumi preview
and/or `pulumi up`:
error: an unhandled error occurred: program failed:
waiting for RPCs: failed to invoke helm template: rpc error: code = Unknown desc = invocation of kubernetes:helm:template returned an error: failed to generate YAML for specified Helm chart: could not get server version from Kubernetes: the server has asked for the client to provide credentials
I have tried reloading the kubeconfig; kubectl
works just fine, but pulumi
is failing.
Does anybody have an idea what might be going wrong? Or where to look in order to approach this?
Any help would be much appreciated!

calm-van-29132
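A likely cause, given the roughly 7-day cadence: DigitalOcean-issued kubeconfig credentials expire (by default after about a week). `kubectl` keeps working when its kubeconfig uses `doctl`'s exec plugin to mint fresh tokens, while a kubeconfig handed to Pulumi as a static string holds a token that has since expired. One fix is to make sure the kubeconfig Pulumi's provider sees also uses exec auth. A sketch of the relevant stanza (the cluster ID is a placeholder):

```yaml
users:
- name: do-admin
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: doctl
      args:
        - kubernetes
        - cluster
        - kubeconfig
        - exec-credential
        - --version=v1beta1
        - <your-cluster-id>
```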
03/06/2024, 7:30 PM

stocky-refrigerator-75813
03/12/2024, 1:20 PM
Secret
value from a cluster. I've got the following code:
cert_ca = k8s.core.v1.Secret.get("ou-tls-certificate", "openunison/ou-tls-certificate").data["ca.crt"].apply(lambda data: str(data))
stocky-refrigerator-75813
03/12/2024, 1:22 PM
cert_ca
is always Output<T>
and not a string. What am I missing? If I log the value or data
in the lambda, I get the value I was expecting.

flat-afternoon-93346
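The short answer to the Output<T> question above: a Pulumi Output can never be unwrapped into a plain string in program code; all transformation happens inside `.apply`, and the result stays an Output. Also note that values under a Kubernetes Secret's `.data` are base64-encoded, so `str(data)` returns the encoded form. A minimal sketch of the decode step (the secret names are taken from the snippet above):

```python
import base64

def decode_secret_field(b64_value: str) -> str:
    # Kubernetes Secret `.data` values are base64-encoded bytes;
    # decode them inside the .apply callback.
    return base64.b64decode(b64_value).decode("utf-8")

# Inside the Pulumi program (sketch):
# cert_ca = k8s.core.v1.Secret.get(
#     "ou-tls-certificate", "openunison/ou-tls-certificate"
# ).data["ca.crt"].apply(decode_secret_field)
# cert_ca is still an Output[str]: pass it to other resources as-is,
# or pulumi.export it to see the resolved value after an update.
```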
03/13/2024, 5:46 PM
Exception: invoke of kubernetes:helm:template failed: invocation of kubernetes:helm:template returned an error: failed to generate YAML for specified Helm chart: failed to pull chart: chart "logdna/agent" version "latest" not found in https://assets.logdna.com/charts repository
This is my code; the strange thing is this error happens for all the charts. Do I need to pass anything additional to deploy?
import pulumi
from kube import kube_provider
from pulumi_kubernetes import helm

# Define your Pulumi stack name
name = pulumi.get_stack()

# Define your Kubernetes provider targeting EKS
kube_provider = kube_provider

# Replace "YOUR_INGESTION_KEY_HERE" with your actual LogDNA ingestion key
logdna = helm.v3.Chart(
    f"{name}-logdna",
    helm.v3.ChartOpts(
        chart="logdna/agent",
        namespace="kube-system",
        version="latest",
        fetch_opts=helm.v3.FetchOpts(
            repo="https://assets.logdna.com/charts"  # Use correct protocol specifier
        ),
        values={"ingestionKey": "YOUR_INGESTION_KEY_HERE"},
    ),
    opts=pulumi.ResourceOptions(provider=kube_provider),
)
# Output the release name and namespace for reference
pulumi.export("logdna_release_name", logdna.release_name)
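The "version 'latest' not found" failure above is expected: Helm repositories have no "latest" keyword, so the repository index lookup for a chart version literally named "latest" finds nothing. Omitting `version` makes Helm resolve the newest published chart. Sketched against the snippet above:

```diff
         chart="logdna/agent",
         namespace="kube-system",
-        version="latest",
+        # Helm has no "latest" keyword: either omit `version` to get the
+        # newest published chart, or pin a real version from the repo index
```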
brainy-accountant-88780
03/14/2024, 1:01 PM

brainy-accountant-88780
03/15/2024, 10:43 AM

eager-wall-56838
03/17/2024, 3:21 PM
export const cert = new k8s.apiextensions.CustomResource(
  "cert",
  {
    apiVersion: "cert-manager.io/v1",
    kind: "Certificate",
    spec: {
      secretName: "foo-bar",
      dnsNames: ["example.default.svc.cluster.local"],
      issuerRef: { .. },
    },
  }
)
I need to get secretName
to reference in a subsequent deployment. My workaround is to make const certSecretName = "foo-bar"
then reference it in both places and add a dependsOn, but that's not great.
Thanks!

icy-controller-6092
03/17/2024, 10:49 PM

lively-dinner-8285
03/18/2024, 7:17 AM
Release
After running pulumi up
it says the deployment was successful and the status is deployed; however, there are no workloads showing up in GKE. Are there any specific values I need to set to get this going?

lively-dinner-8285
03/18/2024, 7:17 AM
import pulumi
from pulumi_kubernetes.helm.v3 import Release, ReleaseArgs, RepositoryOptsArgs
from typing import Optional, Dict
import pulumi_kubernetes as k8s
from pulumi import ResourceOptions

class HelmCharts:
    def __init__(
        self,
        release_name: str,
        chart_name: str,
        repo: Optional[str],
        version: Optional[str],
        namespace: Optional[str],
        cluster_name: str,
    ):
        self.release_name = release_name
        self.chart_name = chart_name
        self.version = version
        self.repo = repo
        self.namespace = namespace
        self.cluster_name = cluster_name

    def deploy(self, values: Optional[Dict] = None):
        k8s_provider = k8s.Provider(self.cluster_name)
        chart = Release(
            self.release_name,
            ReleaseArgs(
                chart=self.chart_name,
                version=self.version,
                namespace=self.namespace,
                verify=False,
                create_namespace=True,
                cleanup_on_fail=True,
                wait_for_jobs=True,
                repository_opts=RepositoryOptsArgs(
                    repo=self.repo,
                ),
                values=values,  # used to set chart values
            ),
            opts=ResourceOptions(provider=k8s_provider),
        )
        return chart

    @staticmethod
    def export_chart_status(chart: Release):
        return pulumi.export("chart status", chart.status)
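One thing stands out in the snippet above: `k8s.Provider(self.cluster_name)` only names the provider resource; without a `kubeconfig` argument the provider falls back to the ambient kubectl context, so the release may have been installed into whatever cluster the local context pointed at rather than the intended GKE cluster. A sketch of the fix (the new `kubeconfig` parameter is an assumption about how the caller would supply it):

```diff
-    def deploy(self, values: Optional[Dict] = None):
-        k8s_provider = k8s.Provider(self.cluster_name)
+    def deploy(self, values: Optional[Dict] = None, kubeconfig: Optional[str] = None):
+        # pass the target cluster's kubeconfig explicitly; with no
+        # kubeconfig the provider uses the ambient kubectl context
+        k8s_provider = k8s.Provider(self.cluster_name, kubeconfig=kubeconfig)
```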
lively-dinner-8285
03/18/2024, 7:37 PM