straight-fireman-55591 (08/08/2023, 9:56 AM)
I used google-native:container/v1beta1:Cluster and that worked fine: the cluster is up, the node pools are up. But the Helm part of the YAML was ignored and nothing was installed on the cluster. What am I doing wrong?
The shortened version of my YAML:
gke-cluster-native:
  type: google-native:container/v1beta1:Cluster
  properties:
    ...
nginx-ingress:
  type: kubernetes:helm.sh/v3:Release
  properties:
    ...
kuberay-operator:
  type: kubernetes:helm.sh/v3:Release
  properties:
    ...
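Two things to check here (a sketch, not taken from the full program). First, in Pulumi YAML every resource must sit under the top-level resources: key; a mis-indented block is silently ignored. Second, a helm.sh/v3:Release only lands on the new cluster if it is bound to a kubernetes provider built from that cluster's kubeconfig; otherwise it targets whatever the ambient kubeconfig points at. Assuming a ${kubeconfig} variable assembled from the cluster's endpoint and CA certificate (the variable name is illustrative):

```yaml
resources:
  provider:
    type: pulumi:providers:kubernetes
    properties:
      kubeconfig: ${kubeconfig}   # built from gke-cluster-native outputs; illustrative

  nginx-ingress:
    type: kubernetes:helm.sh/v3:Release
    properties:
      chart: "ingress-nginx"
      repositoryOpts:
        repo: https://kubernetes.github.io/ingress-nginx
    options:
      provider: ${provider}       # without this binding, the release does not target the new cluster
      dependsOn:
        - ${gke-cluster-native}
```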
billowy-army-68599
straight-fireman-55591 (08/08/2023, 1:11 PM)
I switched from gcp:container/cluster:Cluster to google-native:container/v1beta1:Cluster. I thought I could get the kubeconfig more easily, but I still can't figure this out.
billowy-army-68599
straight-fireman-55591 (08/08/2023, 1:15 PM)
billowy-army-68599
straight-fireman-55591 (08/08/2023, 1:26 PM)
billowy-army-68599
straight-fireman-55591 (08/08/2023, 1:36 PM)
billowy-army-68599
straight-fireman-55591 (08/08/2023, 1:38 PM)
billowy-army-68599
straight-fireman-55591 (08/08/2023, 1:53 PM)
billowy-army-68599
straight-fireman-55591 (08/08/2023, 2:23 PM)
billowy-army-68599
straight-fireman-55591 (08/08/2023, 2:42 PM)
billowy-army-68599
ingressClassResource
"ingressClassResource": {
    "name": "external",
    "default": False,
    "controllerValue": "k8s.io/ingress-nginx/external",
},
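For context on the snippet above (an assumption about intent, based on the upstream chart): in ingress-nginx chart 4.7.x these keys are read from under the controller key, so passed at the top level of values they have no effect and the chart falls back to its defaults. The nested form would be:

```yaml
controller:
  ingressClass: "external"
  ingressClassResource:
    name: "external"
    default: false          # avoid two releases both claiming the default class
    controllerValue: "k8s.io/ingress-nginx/external"
```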
straight-fireman-55591 (08/08/2023, 3:32 PM)
billowy-army-68599
straight-fireman-55591 (08/09/2023, 2:45 PM)
I started with two kubernetes:helm.sh/v3:Release resources:
• rayoperator
• rayapiserver
After ensuring everything was working fine, I proceeded to add two kubernetes:helm.sh/v3:Release configurations for two nginx installations:
1. ingress-nginx
2. ingress-nginx-vpn
Configuration details:
ingress-nginx:
  type: kubernetes:helm.sh/v3:Release
  properties: # The arguments to resource properties.
    chart: "ingress-nginx"
    repositoryOpts:
      repo: https://kubernetes.github.io/ingress-nginx
    cleanupOnFail: true
    createNamespace: true
    lint: true
    name: "ingress-nginx"
    namespace: "ingress-nginx"
    version: "4.7.1"
    values:
      ingressClassResource:
        name: "internet"
        enabled: true
        controllerValue: "k8s.io/internet"
      ingressClass: "internet"
      annotations: "meta.helm.sh/release-namespace=ingress-nginx"
  options:
    provider: ${provider}
    dependsOn:
      - ${gke-cluster-native}
      - ${gke-cluster-vpc}
    parent: ${gke-cluster-native}
#
# Deploy nginx ingress controller for VPN access
#
ingress-nginx-vpn:
  type: kubernetes:helm.sh/v3:Release
  properties: # The arguments to resource properties.
    chart: "ingress-nginx"
    repositoryOpts:
      repo: https://kubernetes.github.io/ingress-nginx
    cleanupOnFail: true
    createNamespace: true
    lint: true
    name: "ingress-nginx-vpn"
    namespace: "ingress-nginx-vpn"
    version: "4.7.1"
    values:
      ingressClassResource:
        name: "vpn"
        enabled: true
        controllerValue: "k8s.io/vpn"
      ingressClass: "vpn"
      annotations: "meta.helm.sh/release-namespace=ingress-nginx-vpn"
  options:
    provider: ${provider}
    dependsOn:
      - ${gke-cluster-native}
      - ${gke-cluster-vpc}
    parent: ${gke-cluster-native}
GitLab job log:
Notably, there was a warning indicating a new version of Pulumi, suggesting an upgrade from 3.77.1 to 3.78.1.
Getting source from Git repository
Executing "step_script" stage of the job script
Activated service account credentials for: [REDACTED-@[MASKED].iam..com]
warning: A new version of Pulumi is available. To upgrade from version '3.77.1' to '3.78.1', visit <https://pulumi.com/docs/install/> for manual instructions and release notes.
$ pulumi stack select $STACK
$ pulumi config set google-native:project ${REDACTED_RND_GCP_PROJECT}
$ pulumi config set gcp:project ${REDACTED_RND_GCP_PROJECT}
$ pulumi config set gcp:region ${REDACTED_RND_GCP_REGION}
$ pulumi update --yes
Previewing update ([MASKED]):
[resource plugin google-native-0.31.1] installing
@ previewing update....[resource plugin kubernetes-4.0.3] installing
[resource plugin gcp-6.62.0] installing
.
pulumi:pulumi:Stack REDACTED-[MASKED] running
@ previewing update....
.
gcp:serviceAccount:Account gke-cluster-sa
gcp:compute:Network gke-cluster-vpc
gcp:compute:Subnetwork gke-cluster-primary-range-nodes
gcp:compute:Subnetwork gke-cluster-primary-range-services
gcp:compute:Subnetwork gke-cluster-primary-range-pods
google-native:container/v1beta1:Cluster gke-cluster-native
gcp:serviceAccount:Account gke-cluster-nodepool-sa
pulumi:providers:kubernetes provider
gcp:container:NodePool gke-[MASKED]-a
gcp:container:NodePool ray-2-[MASKED]-c
@ previewing update....
.
kubernetes:helm.sh/v3:Release kuberay-operator
@ previewing update....
.
+ kubernetes:helm.sh/v3:Release ingress-nginx create
@ previewing update....
.
+ kubernetes:helm.sh/v3:Release ingress-nginx-vpn create
@ previewing update....
.
kubernetes:helm.sh/v3:Release kuberay-apiserver
pulumi:pulumi:Stack REDACTED-[MASKED]
Resources:
+ 2 to create
13 unchanged
Updating ([MASKED]):
pulumi:pulumi:Stack REDACTED-[MASKED] running
@ updating....
.
gcp:serviceAccount:Account gke-cluster-sa
gcp:compute:Network gke-cluster-vpc
gcp:compute:Subnetwork gke-cluster-primary-range-pods
gcp:compute:Subnetwork gke-cluster-primary-range-nodes
gcp:compute:Subnetwork gke-cluster-primary-range-services
@ updating....
google-native:container/v1beta1:Cluster gke-cluster-native
gcp:serviceAccount:Account gke-cluster-nodepool-sa
pulumi:providers:kubernetes provider
gcp:container:NodePool gke-[MASKED]-a
gcp:container:NodePool ray-2-[MASKED]-c
@ updating....
.
.
kubernetes:helm.sh/v3:Release kuberay-operator
@ updating....
.
+ kubernetes:helm.sh/v3:Release ingress-nginx-vpn creating (0s)
@ updating....
.
+ kubernetes:helm.sh/v3:Release ingress-nginx creating (0s)
@ updating....
.
kubernetes:helm.sh/v3:Release kuberay-apiserver
@ updating....
.
+ kubernetes:helm.sh/v3:Release ingress-nginx creating (28s) warning: Helm release "ingress-nginx" was created but has a failed status. Use the `helm` command to investigate the error, correct it, then retry. Reason: 1 error occurred:
+ kubernetes:helm.sh/v3:Release ingress-nginx creating (28s) error: 1 error occurred:
+ kubernetes:helm.sh/v3:Release ingress-nginx **creating failed** error: 1 error occurred:
@ updating....
.
+ kubernetes:helm.sh/v3:Release ingress-nginx-vpn created (84s)
@ updating....
.
.
pulumi:pulumi:Stack REDACTED-[MASKED] running error: update failed
pulumi:pulumi:Stack REDACTED-[MASKED] **failed** 1 error
Diagnostics:
kubernetes:helm.sh/v3:Release (ingress-nginx):
warning: Helm release "ingress-nginx" was created but has a failed status. Use the `helm` command to investigate the error, correct it, then retry. Reason: 1 error occurred:
* ingressclasses.networking.k8s.io "nginx" already exists
error: 1 error occurred:
* Helm release "ingress-nginx/ingress-nginx" was created, but failed to initialize completely. Use Helm CLI to investigate.: failed to become available within allocated timeout. Error: Helm Release ingress-nginx/ingress-nginx: 1 error occurred:
* ingressclasses.networking.k8s.io "nginx" already exists
pulumi:pulumi:Stack (REDACTED-[MASKED]):
error: update failed
Resources:
+ 1 created
13 unchanged
Duration: 1m36s
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: command terminated with exit code 1
Helm:
helm ls -A
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
ingress-nginx ingress-nginx 1 2023-08-14 10:25:10.346141657 +0000 UTC failed ingress-nginx-4.7.1 1.8.1
ingress-nginx-vpn ingress-nginx-vpn 1 2023-08-14 10:25:10.309215065 +0000 UTC deployed ingress-nginx-4.7.1 1.8.1
kuberay-apiserver ray-system 1 2023-08-10 16:28:36.23350083 +0000 UTC deployed kuberay-apiserver-0.5.0
kuberay-operator ray-system 1 2023-08-10 16:22:35.348352147 +0000 UTC deployed kuberay-operator-0.5.0
Pods:
kubectl get pods -A | grep nginx
ingress-nginx-vpn ingress-nginx-vpn-controller-6674596c84-2xc6c 1/1 Running 0 22m
ingress-nginx ingress-nginx-controller-5fcb5746fc-vr89d 1/1 Running 0 22m
Interestingly, despite the ingress-nginx Helm release failing, the corresponding pods are running as expected.
Attempting to delete either nginx release or the entire cluster runs into issues because of this Helm failure. For a clean stack destruction, Pulumi should delete the Helm releases first: if it deletes the GKE cluster before the releases, it will error because it cannot find the Helm releases on a cluster that no longer exists.
helm ls -A
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
ingress-nginx ingress-nginx 2 2023-08-14 12:17:44.146742142 +0000 UTC deployed ingress-nginx-4.7.1 1.8.1
ingress-nginx-vpn ingress-nginx-vpn 1 2023-08-14 10:25:10.309215065 +0000 UTC deployed ingress-nginx-4.7.1 1.8.1
kuberay-apiserver ray-system 1 2023-08-10 16:28:36.23350083 +0000 UTC deployed kuberay-apiserver-0.5.0
kuberay-operator ray-system 1 2023-08-10 16:22:35.348352147 +0000 UTC deployed kuberay-operator-0.5.0
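The failure pattern above is consistent with a values-nesting issue: because ingressClassResource and ingressClass were placed directly under values: rather than under values.controller, the chart ignored them and both releases tried to create the chart-default IngressClass named "nginx". Whichever release installed second then hit ingressclasses.networking.k8s.io "nginx" already exists. A hedged sketch of values that should keep the two releases from colliding (assuming ingress-nginx chart 4.7.1):

```yaml
values:
  controller:
    ingressClass: "internet"              # class this controller watches
    ingressClassResource:
      name: "internet"                    # unique per release; the vpn release would use "vpn"
      enabled: true
      default: false                      # at most one class cluster-wide should be default
      controllerValue: "k8s.io/internet"  # must also be unique per release
```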