# general
b
Latest versions of the Pulumi Kubernetes provider shouldn't need Helm. Can you tell me a little more about what providers you're using?
c
Using Kubernetes and digitalocean @billowy-army-68599
b
Can you share some code? Your action, for example? How have you configured Pulumi to talk to those providers?
c
Sure, here is what my gitlab CI looks like:
Copy code
stages:
  - build_deployer
  - deploy

hasura_src:
  stage: build_deployer
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  variables:
    WORKDIR: "$CI_PROJECT_DIR"
  script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $WORKDIR --dockerfile $WORKDIR/Dockerfile --destination $CI_REGISTRY/leonard-cyber/$CI_PROJECT_NAME/pulumi:latest
  rules:
    - changes:
      - Dockerfile

pulumi:
  stage: deploy
  image: 
    name: $CI_REGISTRY/leonard-cyber/$CI_PROJECT_NAME/pulumi:latest
    entrypoint: [""]
  script:
      - bash ./scripts/setup.sh
  only:
    - global
The dockerfile in question:
Copy code
FROM pulumi/pulumi-go:latest
RUN pulumi plugin install resource kubernetes 2.8.1
RUN pulumi plugin install resource digitalocean 3.4.0
Sorry, I bungled the CI paste >.<
Copy code
#!/bin/bash
# exit if a command returns a non-zero exit code and also print the commands and their args as they are executed
set -e -x
pulumi login
pulumi stack select -c "leonard-cyber/${CI_PROJECT_NAME}/${CI_COMMIT_REF_NAME}"
pulumi config set branch_name "${CI_COMMIT_REF_NAME}"
pulumi up -v=9 --debug --yes --non-interactive 2>&1
^ ./scripts/setup.sh
The job will run until GitLab finishes downloading the Go modules and then just hang. @billowy-army-68599
Thank you for the help btw
b
So you're downloading the modules as part of the pulumi up?
c
It looks like it's downloading modules. I'm not sure what it's doing. Here is the actual pulumi output:
Copy code
pulumi:pulumi:Stack main-shared-global  go: downloading github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99
    pulumi:pulumi:Stack main-shared-global  go: downloading github.com/src-d/gcfg v1.4.0
    pulumi:pulumi:Stack main-shared-global  go: downloading github.com/spf13/pflag v1.0.5
    pulumi:pulumi:Stack main-shared-global  go: downloading gopkg.in/warnings.v0 v0.1.2
And it will just halt after (I presume) the last module has been downloaded.
The output will halt, that is. The program just chugs along forever.
b
How long does it halt for? Is there a CI timeout? I think it's failing while trying to compile your Go binary.
c
It will literally run for over an hour if I let it.
It used up a lot of CI minutes that way >.<
Also, this only happens during the CI process. If I do pulumi up on my local machine it works fine. In case it really is failing to compile, how do I get it so that A) that doesn't happen, and B) when it happens it tells me instead of outputting nothing?
Here is my current pulumi code, if it helps. It's just one main.go file.
Copy code
package main

import (
	"<http://github.com/pulumi/pulumi-digitalocean/sdk/v3/go/digitalocean|github.com/pulumi/pulumi-digitalocean/sdk/v3/go/digitalocean>"
	"<http://github.com/pulumi/pulumi-kubernetes/sdk/v2/go/kubernetes|github.com/pulumi/pulumi-kubernetes/sdk/v2/go/kubernetes>"
	"<http://github.com/pulumi/pulumi-kubernetes/sdk/v2/go/kubernetes/helm/v3|github.com/pulumi/pulumi-kubernetes/sdk/v2/go/kubernetes/helm/v3>"
	"<http://github.com/pulumi/pulumi/sdk/v2/go/pulumi|github.com/pulumi/pulumi/sdk/v2/go/pulumi>"
	"<http://github.com/pulumi/pulumi/sdk/v2/go/pulumi/config|github.com/pulumi/pulumi/sdk/v2/go/pulumi/config>"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		if cluster, err := digitalocean.NewKubernetesCluster(ctx, "main", &digitalocean.KubernetesClusterArgs{
			Region:       pulumi.String("nyc1"),
			Version:      pulumi.String("1.20.2-do.0"),
			AutoUpgrade:  pulumi.Bool(true),
			SurgeUpgrade: pulumi.Bool(true),
			NodePool: digitalocean.KubernetesClusterNodePoolArgs{
				Name:      pulumi.String("autoscale-pool-01"),
				Size:      pulumi.String("s-4vcpu-8gb"),
				AutoScale: pulumi.Bool(true),
				MinNodes:  pulumi.Int(2),
				MaxNodes:  pulumi.Int(4),
			},
		}); err != nil {
			return err
		} else if _, err := kubernetes.NewProvider(ctx, "main", &kubernetes.ProviderArgs{
			Kubeconfig: cluster.KubeConfigs.ApplyString(
				func(config []digitalocean.KubernetesClusterKubeConfig) string {
					return *config[0].RawConfig
				},
			),
		}); err != nil {
			return err
		} else if _, err := helm.NewChart(ctx, "ingress-nginx", helm.ChartArgs{
			Chart:     pulumi.String("ingress-nginx"),
			Namespace: pulumi.String("ingress-nginx"),
			FetchArgs: helm.FetchArgs{
				Repo: pulumi.String("https://kubernetes.github.io/ingress-nginx"),
			},
		}); err != nil {
			return err
		} else {
			cfg := config.New(ctx, "digitalocean")
			ctx.Export("digitalOceanToken", cfg.RequireSecret("token"))
			ctx.Export("clusterName", cluster.Name)
			return nil
		}
	})
}
b
Okay, so what's happening when you run pulumi up is that it's doing:
Copy code
go mod download
go build main.go
I think it's happening because of the second stage. The Kubernetes SDK is really quite large and takes quite a bit of memory to compile. Can you try creating a distinct stage in your CI pipeline to build your Pulumi program binary, something like:
Copy code
build:
  stage: build
  image: 
    name: $CI_REGISTRY/leonard-cyber/$CI_PROJECT_NAME/pulumi:latest
    entrypoint: [""]
  script:
      - go mod download
      - go build -o program
  only:
    - global
And see if that's the place where it hangs?
c
Sure, I can do that.
b
You can also then use that binary in your Pulumi program (so it doesn't rebuild at compile time) by updating your Pulumi.yaml like this:
Copy code
runtime:
  name: go
  options:
    binary: program
c
Would it be better to just run the entire thing in a dockerfile? The entire process?
It is indeed hanging after go build in the CI/CD pipeline, but not when I build the binary locally. Does this mean the problem is with pulumi-go's installation of the go toolchain or something? Do I need to upgrade to go 1.16?
b
my guess is that your runner doesn't have enough cores/memory to do the build
upgrading to go 1.16 has some memory improvements and uses CPU better
c
I don't understand how anybody compiles anything on GitLab, then.
b
is this your own runner?
Can you show me your go.mod and let me know which version of the Kubernetes SDK you're using?
c
No, this is the shared, gitlab.com runner
This means I'm using 2.8.1, correct?
Copy code
module main-shared

go 1.14

require (
	github.com/pulumi/pulumi-digitalocean/sdk/v3 v3.4.0
	github.com/pulumi/pulumi-kubernetes/sdk/v2 v2.8.1
	github.com/pulumi/pulumi/sdk/v2 v2.21.2
)
b
That's right, yeah. I just looked up the size of the shared runners and they seem quite small 😞 1 core, 4GB memory. The Kubernetes SDK is fairly big; short of registering your own runner, I'm not sure what to recommend.
c
@billowy-army-68599 One more thing: can you look at my code and see if you can figure out why I can't install any Helm charts? It's complaining about a lack of a kubeconfig, but I'm trying to pass it in from the DigitalOcean cluster resource.
b
You haven't specified a resource provider, so your Helm installation is using the ambient kubeconfig. Take a look here https://github.com/jaxxstorm/iac-in-go/blob/master/external-dns/main.go#L106-L109 and here https://github.com/jaxxstorm/iac-in-go/blob/master/external-dns/main.go#L135
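In your code it would look roughly like this - untested sketch, and I'm reading the kubeconfig from a made-up stack config key just to keep it short; in your program it would be the cluster.KubeConfigs.ApplyString(...) you already have:
Copy code
package main

import (
	"github.com/pulumi/pulumi-kubernetes/sdk/v2/go/kubernetes"
	"github.com/pulumi/pulumi-kubernetes/sdk/v2/go/kubernetes/helm/v3"
	"github.com/pulumi/pulumi/sdk/v2/go/pulumi"
	"github.com/pulumi/pulumi/sdk/v2/go/pulumi/config"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// Explicit provider built from a kubeconfig. The "kubeconfig" stack
		// config key is invented for this sketch; in your program the value
		// comes from cluster.KubeConfigs instead.
		kubep, err := kubernetes.NewProvider(ctx, "main", &kubernetes.ProviderArgs{
			Kubeconfig: pulumi.String(config.Require(ctx, "kubeconfig")),
		})
		if err != nil {
			return err
		}

		// The important part: pass the provider to the chart with pulumi.Provider.
		// Without it, the chart falls back to whatever ambient kubeconfig exists.
		_, err = helm.NewChart(ctx, "ingress-nginx", helm.ChartArgs{
			Chart:     pulumi.String("ingress-nginx"),
			Namespace: pulumi.String("ingress-nginx"),
			FetchArgs: helm.FetchArgs{
				Repo: pulumi.String("https://kubernetes.github.io/ingress-nginx"),
			},
		}, pulumi.Provider(kubep))
		return err
	})
}
The same pulumi.Provider(kubep) option goes on any namespaces or other Kubernetes resources you want created in that cluster.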
c
@billowy-army-68599 That's not what I'm doing with kubernetes.NewProvider?
What does that do then?
b
Yeah, you've instantiated your provider, but you've never actually passed it to your resource. See L135 above.
Also, I see you're using a lot of if statements; you probably don't need those.
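The usual Go style is to create the resource, check the error, and return early - here's a rough, untested sketch of just the cluster bit of your program in that shape:
Copy code
package main

import (
	"github.com/pulumi/pulumi-digitalocean/sdk/v3/go/digitalocean"
	"github.com/pulumi/pulumi/sdk/v2/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// Create the resource at the top level, check the error, return early.
		// No need to nest the rest of the program inside an else branch.
		cluster, err := digitalocean.NewKubernetesCluster(ctx, "main", &digitalocean.KubernetesClusterArgs{
			Region:  pulumi.String("nyc1"),
			Version: pulumi.String("1.20.2-do.0"),
			NodePool: digitalocean.KubernetesClusterNodePoolArgs{
				Name:      pulumi.String("autoscale-pool-01"),
				Size:      pulumi.String("s-4vcpu-8gb"),
				AutoScale: pulumi.Bool(true),
				MinNodes:  pulumi.Int(2),
				MaxNodes:  pulumi.Int(4),
			},
		})
		if err != nil {
			return err
		}

		// ...same pattern for the provider, namespaces, and charts...
		ctx.Export("clusterName", cluster.Name)
		return nil
	})
}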
c
I need them if I want to check the error values every function gives 😜
b
ha yeah true, 😄
c
;_;
You've been very helpful, but tbh I'm running into another issue, and at this point I've spent nearly ten hours just trying to create a Kubernetes cluster in DigitalOcean and deploy a Helm chart inside it.
Copy code
error: the current deployment has 1 resource(s) with pending operations:
  * urn:pulumi:global::main-shared::kubernetes:core/v1:Namespace$kubernetes:helm.sh/v3:Chart::ingress-nginx, interrupted while creating
And then when I manually delete the pending operations in the stack
Copy code
error: post-step event returned an error: failed to normalize URN references: Two resources ('urn:pulumi:global::main-shared::kubernetes:core/v1:Namespace$kubernetes:helm.sh/v3:Chart::ingress-nginx' and 'urn:pulumi:global::main-shared::kubernetes:helm.sh/v3:Chart::ingress-nginx') aliased to the same: 'urn:pulumi:global::main-shared::kubernetes:helm.sh/v2:Chart::ingress-nginx'
b
interrupted while creating
did you send a kill to the update?
c
No, all that happened (at least as far as I can remember) was that it errored with the above. And this somehow corrupted my stack's state. I may be misremembering that part, but when I go into the DigitalOcean kubernetes dashboard I find no helm chart resources, yet when I go back and then remove those pending operations I just get that above error over and over.
Copy code
package main

import (
	"<http://github.com/pulumi/pulumi-digitalocean/sdk/v3/go/digitalocean|github.com/pulumi/pulumi-digitalocean/sdk/v3/go/digitalocean>"
	"<http://github.com/pulumi/pulumi-kubernetes/sdk/v2/go/kubernetes|github.com/pulumi/pulumi-kubernetes/sdk/v2/go/kubernetes>"
	corev1 "<http://github.com/pulumi/pulumi-kubernetes/sdk/v2/go/kubernetes/core/v1|github.com/pulumi/pulumi-kubernetes/sdk/v2/go/kubernetes/core/v1>"
	"<http://github.com/pulumi/pulumi-kubernetes/sdk/v2/go/kubernetes/helm/v3|github.com/pulumi/pulumi-kubernetes/sdk/v2/go/kubernetes/helm/v3>"
	metav1 "<http://github.com/pulumi/pulumi-kubernetes/sdk/v2/go/kubernetes/meta/v1|github.com/pulumi/pulumi-kubernetes/sdk/v2/go/kubernetes/meta/v1>"
	"<http://github.com/pulumi/pulumi/sdk/v2/go/pulumi|github.com/pulumi/pulumi/sdk/v2/go/pulumi>"
	"<http://github.com/pulumi/pulumi/sdk/v2/go/pulumi/config|github.com/pulumi/pulumi/sdk/v2/go/pulumi/config>"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		if cluster, err := digitalocean.NewKubernetesCluster(ctx, "main", &digitalocean.KubernetesClusterArgs{
			Region:       pulumi.String("nyc1"),
			Version:      pulumi.String("1.20.2-do.0"),
			AutoUpgrade:  pulumi.Bool(true),
			SurgeUpgrade: pulumi.Bool(true),
			NodePool: digitalocean.KubernetesClusterNodePoolArgs{
				Name:      pulumi.String("autoscale-pool-01"),
				Size:      pulumi.String("s-4vcpu-8gb"),
				AutoScale: pulumi.Bool(true),
				MinNodes:  pulumi.Int(2),
				MaxNodes:  pulumi.Int(4),
			},
		}); err != nil {
			return err
		} else if kubep, err := kubernetes.NewProvider(ctx, "main", &kubernetes.ProviderArgs{
			Kubeconfig: cluster.KubeConfigs.ApplyString(
				func(config []digitalocean.KubernetesClusterKubeConfig) string {
					return *config[0].RawConfig
				},
			),
		}); err != nil {
			return err
		} else if gitlabNS, err := corev1.NewNamespace(ctx, "gitlab", &corev1.NamespaceArgs{
			Metadata: &metav1.ObjectMetaArgs{
				Name: pulumi.String("gitlab"),
			},
		}, pulumi.Provider(kubep)); err != nil {
			return err
		} else if _, err := helm.NewChart(ctx, "gitlab-runner", helm.ChartArgs{
			Chart:     pulumi.String("gitlab-runner"),
			Namespace: pulumi.String("gitlab"),
			FetchArgs: helm.FetchArgs{
				Repo: pulumi.String("https://charts.gitlab.io/"),
			},
		}, pulumi.Provider(kubep), pulumi.Parent(gitlabNS)); err != nil {
			return err
		} else if nginxNS, err := corev1.NewNamespace(ctx, "nginx", &corev1.NamespaceArgs{
			Metadata: &metav1.ObjectMetaArgs{
				Name: pulumi.String("nginx"),
			},
		}, pulumi.Provider(kubep)); err != nil {
			return err
		} else if _, err := helm.NewChart(ctx, "ingress-nginx", helm.ChartArgs{
			Chart:     pulumi.String("ingress-nginx"),
			Namespace: pulumi.String("nginx"),
			FetchArgs: helm.FetchArgs{
				Repo: pulumi.String("https://kubernetes.github.io/ingress-nginx"),
			},
		}, pulumi.Provider(kubep), pulumi.Parent(nginxNS)); err != nil {
			return err
		} else {
			cfg := config.New(ctx, "digitalocean")
			ctx.Export("digitalOceanToken", cfg.RequireSecret("token"))
			ctx.Export("clusterName", cluster.Name)
			return nil
		}
	})
}
b
It looks like you have duplicate resources in your state now; you'll need to export the stack state, fix it up, and import it back 😞
c
😞
It's starting to seem like whatever design decisions you guys are making might make sense internally or for some advanced use case, but somehow they just multiply, and I've managed to run into like six footguns at once.
No offense, I know I'm getting this support for free (for now) and I'm grateful
b
I'm sorry you're having a difficult time; we do try our best to make this as easy as possible.
I'm happy to jump on a call tomorrow to see if we can get you to a place where things work.
c
That'd be awesome tbh
I think I'm just going to destroy everything in the cluster now and try to start over. After I provision a runner, I guess we'll just try to keep everything within CI/CD so interrupts can't happen.
b
Destroying will get you back on track; I'll DM you a Calendly link.