#yaml

straight-fireman-55591

08/08/2023, 9:56 AM
hi, I created a new GKE cluster using
google-native:container/v1beta1:Cluster
That worked fine: the cluster is up and the node pools are up. But the Helm part of the YAML was ignored and nothing was installed on the cluster. What am I doing wrong? The shortened version of my YAML:
gke-cluster-native:
    type: google-native:container/v1beta1:Cluster
    properties:
    ...

nginx-ingress:
    type: kubernetes:helm.sh/v3:Release
    properties:
    ...

kuberay-operator:
    type: kubernetes:helm.sh/v3:Release
    properties:
    ...

billowy-army-68599

08/08/2023, 12:39 PM
@straight-fireman-55591 can you share the whole code/definition?

straight-fireman-55591

08/08/2023, 1:11 PM
Here it is. Thank you.
I changed from gcp:container/cluster:Cluster to google-native:container/v1beta1:Cluster. I thought I could get the kubeconfig more easily, but I still can't figure this out.

billowy-army-68599

08/08/2023, 1:14 PM
give me a few

straight-fireman-55591

08/08/2023, 1:15 PM
6:14 AM local time. It is early for you 🙂

billowy-army-68599

08/08/2023, 1:20 PM
always nice to start the day with some GKE and YAML 🙂

straight-fireman-55591

08/08/2023, 1:26 PM
definitely. Better than deleting a production database

billowy-army-68599

08/08/2023, 1:34 PM
I think I see the issue here: calling functions on resources in YAML isn't possible and/or well documented. With GKE generally, it looks like kubeconfig resources aren't generated. In programming languages this isn't an issue, because you can just take the outputs and pass them through an intermediate variable. You can't really do this in YAML. I'm going to file an issue and check with the team to see if there's a workaround

straight-fireman-55591

08/08/2023, 1:36 PM
my workaround was to use Helm in a separate step to install it, but it would be nice to do it all in one place.

straight-fireman-55591

08/08/2023, 1:38 PM
I don't really have a reason to use anything other than YAML, to be honest. The code is nice, simple, easy to read, and always works. There are no Python libraries to worry about.
https://github.com/pulumi/pulumi-google-native/issues/709 this also affects the classic module, if that is still supported.
so for now there is no way, right?

billowy-army-68599

08/08/2023, 1:50 PM
it doesn’t seem so, no 😞
actually, let me see if there’s an option

straight-fireman-55591

08/08/2023, 1:53 PM
thank you
I think that exists for EKS clusters

billowy-army-68599

08/08/2023, 2:19 PM
@straight-fireman-55591 here you go 🙂 https://github.com/jaxxstorm/pulumi-examples/blob/main/yaml/gcp/gke/Pulumi.yaml You can build the kubeconfig as a JSON object and then pass it to the provider
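For readers skimming the thread, the pattern in that linked Pulumi.yaml is roughly the following. This is a sketch, not the exact file: the resource name `gke-cluster` and the variable wiring are placeholders, and it assumes the `gke-gcloud-auth-plugin` exec plugin is available wherever Pulumi runs.

```yaml
variables:
  # Assemble a kubeconfig from the cluster's outputs (names are placeholders)
  kubeconfig:
    fn::toJSON:
      apiVersion: v1
      kind: Config
      clusters:
        - name: gke-cluster
          cluster:
            certificate-authority-data: ${gke-cluster.masterAuth.clusterCaCertificate}
            server: https://${gke-cluster.endpoint}
      contexts:
        - name: gke-cluster
          context:
            cluster: gke-cluster
            user: gke-cluster
      current-context: gke-cluster
      users:
        - name: gke-cluster
          user:
            exec:
              apiVersion: client.authentication.k8s.io/v1beta1
              command: gke-gcloud-auth-plugin  # assumes the plugin is installed
              provideClusterInfo: true

resources:
  provider:
    type: pulumi:providers:kubernetes
    properties:
      kubeconfig: ${kubeconfig}
```

Helm releases can then reference the explicit provider with `provider: ${provider}` in their `options`.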

straight-fireman-55591

08/08/2023, 2:23 PM
let me check

billowy-army-68599

08/08/2023, 2:26 PM
definitely works, I just ran it ;-D

straight-fireman-55591

08/08/2023, 2:42 PM
it works! Thank you
There are errors regarding Helm, but that's on me.
There is an error I logged here about installing a secondary nginx: https://github.com/pulumi/pulumi-kubernetes/issues/2500 Is that something you can check, or should I wait until someone updates the ticket?

billowy-army-68599

08/08/2023, 3:18 PM
@straight-fireman-55591 I think that's because you haven't set:
ingressClassResource
"ingressClassResource": {
                    "name": "external",
                    "default": False,
                    "controllerValue": "<http://k8s.io/ingress-nginx/external|k8s.io/ingress-nginx/external>",
                },

straight-fireman-55591

08/08/2023, 3:32 PM
I will check this, thanks

billowy-army-68599

08/08/2023, 3:44 PM
@straight-fireman-55591 did you check that issue comment? does this work?

straight-fireman-55591

08/09/2023, 2:45 PM
I did, it doesn't. I'm testing more; I want to test more before I reply.
Hi, I began with a clean slate, erasing everything in Google Cloud, including both the stack and the Pulumi state file. I then deployed the following:
• vpc
• gke
• nodes
I deployed both of the following using kubernetes:helm.sh/v3:Release:
• rayoperator
• rayapiserver
After ensuring everything was working fine, I proceeded to add two kubernetes:helm.sh/v3:Release configurations for two nginx installations:
1. ingress-nginx
2. ingress-nginx-vpn
Configuration details:
ingress-nginx:
    type: kubernetes:helm.sh/v3:Release
    properties: # The arguments to resource properties.
      chart: "ingress-nginx"
      repositoryOpts:
        repo: https://kubernetes.github.io/ingress-nginx
      cleanupOnFail: true
      createNamespace: true
      lint: true
      name: "ingress-nginx"
      namespace: "ingress-nginx"
      version: "4.7.1"
      values:
        ingressClassResource:
          name: "internet"
          enabled: true
          controllerValue: "k8s.io/internet"
        ingressClass: "internet"
        annotations: "meta.helm.sh/release-namespace=ingress-nginx"
    options:
      provider: ${provider}
      dependsOn:
      - ${gke-cluster-native}
      - ${gke-cluster-vpc}
      parent: ${gke-cluster-native}

  #
  # Deploy nginx ingress controller for VPN access
  #  
  ingress-nginx-vpn:
    type: kubernetes:helm.sh/v3:Release
    properties: # The arguments to resource properties.
      chart: "ingress-nginx"
      repositoryOpts:
        repo: https://kubernetes.github.io/ingress-nginx
      cleanupOnFail: true
      createNamespace: true
      lint: true
      name: "ingress-nginx-vpn"
      namespace: "ingress-nginx-vpn"
      version: "4.7.1"
      values:
        ingressClassResource:
          name: "vpn"
          enabled: true
          controllerValue: "k8s.io/vpn"
        ingressClass: "vpn"
        annotations: "meta.helm.sh/release-namespace=ingress-nginx-vpn"
    options:
      provider: ${provider}
      dependsOn:
      - ${gke-cluster-native}
      - ${gke-cluster-vpc}
      parent: ${gke-cluster-native}
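One thing worth double-checking in the values above (an observation based on the chart's documented layout, not something confirmed in this thread): the upstream ingress-nginx chart nests ingressClassResource and ingressClass under the controller: key, so top-level copies of those keys may be silently ignored, leaving both releases competing for the default class name "nginx". The nested form would look like:

```yaml
      values:
        controller:                      # the chart reads these keys under "controller"
          ingressClassResource:
            name: "vpn"
            enabled: true
            controllerValue: "k8s.io/vpn"
          ingressClass: "vpn"
```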
GitLab job log: Notably, there was a warning indicating a new version of Pulumi, suggesting an upgrade from 3.77.1 to 3.78.1.
Getting source from Git repository
Executing "step_script" stage of the job script
Activated service account credentials for: [REDACTED-@[MASKED].iam..com]
warning: A new version of Pulumi is available. To upgrade from version '3.77.1' to '3.78.1', visit https://pulumi.com/docs/install/ for manual instructions and release notes.
$ pulumi stack select $STACK
$ pulumi config set google-native:project ${REDACTED_RND_GCP_PROJECT}
$ pulumi config set gcp:project ${REDACTED_RND_GCP_PROJECT}
$ pulumi config set gcp:region ${REDACTED_RND_GCP_REGION}
$ pulumi update --yes
Previewing update ([MASKED]):
[resource plugin google-native-0.31.1] installing
@ previewing update....[resource plugin kubernetes-4.0.3] installing
[resource plugin gcp-6.62.0] installing
.
    pulumi:pulumi:Stack REDACTED-[MASKED] running 
@ previewing update....
.
    gcp:serviceAccount:Account gke-cluster-sa  
    gcp:compute:Network gke-cluster-vpc  
    gcp:compute:Subnetwork gke-cluster-primary-range-nodes  
    gcp:compute:Subnetwork gke-cluster-primary-range-services  
    gcp:compute:Subnetwork gke-cluster-primary-range-pods  
    google-native:container/v1beta1:Cluster gke-cluster-native  
    gcp:serviceAccount:Account gke-cluster-nodepool-sa  
    pulumi:providers:kubernetes provider  
    gcp:container:NodePool gke-[MASKED]-a  
    gcp:container:NodePool ray-2-[MASKED]-c  
@ previewing update....
.
kubernetes:helm.sh/v3:Release kuberay-operator  
@ previewing update....
.
 +  kubernetes:helm.sh/v3:Release ingress-nginx create 
@ previewing update....
.
 +  kubernetes:helm.sh/v3:Release ingress-nginx-vpn create 
@ previewing update....
.
kubernetes:helm.sh/v3:Release kuberay-apiserver  
pulumi:pulumi:Stack REDACTED-[MASKED]  
Resources:
    + 2 to create
    13 unchanged
Updating ([MASKED]):
pulumi:pulumi:Stack REDACTED-[MASKED] running 
@ updating....
.
    gcp:serviceAccount:Account gke-cluster-sa  
    gcp:compute:Network gke-cluster-vpc  
    gcp:compute:Subnetwork gke-cluster-primary-range-pods  
    gcp:compute:Subnetwork gke-cluster-primary-range-nodes  
    gcp:compute:Subnetwork gke-cluster-primary-range-services  
@ updating....
    google-native:container/v1beta1:Cluster gke-cluster-native  
    gcp:serviceAccount:Account gke-cluster-nodepool-sa  
    pulumi:providers:kubernetes provider  
    gcp:container:NodePool gke-[MASKED]-a  
    gcp:container:NodePool ray-2-[MASKED]-c  
@ updating....
.
.
    kubernetes:helm.sh/v3:Release kuberay-operator  
@ updating....
.
 +  kubernetes:helm.sh/v3:Release ingress-nginx-vpn creating (0s) 
@ updating....
.
 +  kubernetes:helm.sh/v3:Release ingress-nginx creating (0s) 
@ updating....
.
    kubernetes:helm.sh/v3:Release kuberay-apiserver  
@ updating....
.
 +  kubernetes:helm.sh/v3:Release ingress-nginx creating (28s) warning: Helm release "ingress-nginx" was created but has a failed status. Use the `helm` command to investigate the error, correct it, then retry. Reason: 1 error occurred:
 +  kubernetes:helm.sh/v3:Release ingress-nginx creating (28s) error: 1 error occurred:
 +  kubernetes:helm.sh/v3:Release ingress-nginx **creating failed** error: 1 error occurred:
@ updating....
.
 +  kubernetes:helm.sh/v3:Release ingress-nginx-vpn created (84s) 
@ updating....
.
.
    pulumi:pulumi:Stack REDACTED-[MASKED] running error: update failed
    pulumi:pulumi:Stack REDACTED-[MASKED] **failed** 1 error
Diagnostics:
  kubernetes:helm.sh/v3:Release (ingress-nginx):
    warning: Helm release "ingress-nginx" was created but has a failed status. Use the `helm` command to investigate the error, correct it, then retry. Reason: 1 error occurred:
        * ingressclasses.networking.k8s.io "nginx" already exists
    error: 1 error occurred:
        * Helm release "ingress-nginx/ingress-nginx" was created, but failed to initialize completely. Use Helm CLI to investigate.: failed to become available within allocated timeout. Error: Helm Release ingress-nginx/ingress-nginx: 1 error occurred:
        * ingressclasses.networking.k8s.io "nginx" already exists
  pulumi:pulumi:Stack (REDACTED-[MASKED]):
    error: update failed
Resources:
    + 1 created
    13 unchanged
Duration: 1m36s
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: command terminated with exit code 1
Helm:
helm ls -A
NAME                    NAMESPACE               REVISION        UPDATED                                 STATUS          CHART                   APP VERSION
ingress-nginx           ingress-nginx           1               2023-08-14 10:25:10.346141657 +0000 UTC failed          ingress-nginx-4.7.1     1.8.1      
ingress-nginx-vpn       ingress-nginx-vpn       1               2023-08-14 10:25:10.309215065 +0000 UTC deployed        ingress-nginx-4.7.1     1.8.1      
kuberay-apiserver       ray-system              1               2023-08-10 16:28:36.23350083 +0000 UTC  deployed        kuberay-apiserver-0.5.0            
kuberay-operator        ray-system              1               2023-08-10 16:22:35.348352147 +0000 UTC deployed        kuberay-operator-0.5.0
Pods:
kubectl get pods -A | grep nginx
ingress-nginx-vpn   ingress-nginx-vpn-controller-6674596c84-2xc6c                    1/1     Running   0               22m
ingress-nginx       ingress-nginx-controller-5fcb5746fc-vr89d                        1/1     Running   0               22m
Interestingly, despite the helm ingress-nginx deployment failing, the corresponding pods are operating as expected. Attempting to delete either nginx release or the entire cluster presents issues due to this helm failure. For an effective stack destruction, Pulumi should delete the Helm releases first. If Pulumi deletes the GKE cluster before addressing the Helm releases, it can encounter errors because it won't recognize the Helm release on a non-existent GKE cluster.
I haven't tried deploying both nginx releases using only Helm.
When I ran the update again, it fixed the failed helm deployment 🤷‍♀️
helm ls -A
NAME                    NAMESPACE               REVISION        UPDATED                                 STATUS          CHART                   APP VERSION
ingress-nginx           ingress-nginx           2               2023-08-14 12:17:44.146742142 +0000 UTC deployed        ingress-nginx-4.7.1     1.8.1      
ingress-nginx-vpn       ingress-nginx-vpn       1               2023-08-14 10:25:10.309215065 +0000 UTC deployed        ingress-nginx-4.7.1     1.8.1      
kuberay-apiserver       ray-system              1               2023-08-10 16:28:36.23350083 +0000 UTC  deployed        kuberay-apiserver-0.5.0            
kuberay-operator        ray-system              1               2023-08-10 16:22:35.348352147 +0000 UTC deployed        kuberay-operator-0.5.0