# pulumi-kubernetes-operator
g
trying out the new operator v2 as well, with beta 2, and noticing the workspace pods are not getting past bootstrap. I believe it is just doing `pulumi-kubernetes-operator cp /agent /tini /share/`, so why would it not complete and then go on to fetch?
```
{"level":"error","ts":1731518999.1037612,"logger":"cmd","msg":"Failed to get watch namespace","error":"WATCH_NAMESPACE must be set","stacktrace":"main.main\n\t/home/runner/work/pulumi-kubernetes-operator/pulumi-kubernetes-operator/cmd/manager/main.go:88\nruntime.main\n\t/opt/hostedtoolcache/go/1.21.13/x6
```
```yaml
- name: WATCH_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
```
that is required on the bootstrap image
h
hi @glamorous-family-11920 can i ask how you installed the v2 operator?
g
was installing via an old kustomize apply method. Testing out the latest helm chart now. The main thing for me is that I need to override the images, including the bootstrap and fetch images.
running into a small problem right now:
```
pulumi E1113 18:31:29.194478       7 webhook.go:154] Failed to make webhook authenticator request: tokenreviews.authentication.k8s.io is forbidden: User "system:serviceaccount:pulumi-operator:pulumi-operator-pulumi-kubernetes-operator" cannot create resource "tokenreviews" in API group "authentication.k8
```
I'm overriding the OPERATOR_NAME in the env vars, checking that now.
```yaml
- name: OPERATOR_NAME
  value: "pulumi-kubernetes-operator"
```
getting closer now, i'll give an update shortly. initially tried a rebuilt image to reduce CVEs, just trying to get the upstream image working now
```
pulumi-operator:pulumi-operator-pulumi-kubernetes-operator:system:auth-delegator
```
that isn't set in the chart
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pulumi-operator:pulumi-operator-pulumi-kubernetes-operator:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
```
when cluster CRDs are turned on
h
does a vanilla install work for you?
```shell
kubectl apply -f https://raw.githubusercontent.com/pulumi/pulumi-kubernetes-operator/refs/tags/v2.0.0-beta.2/deploy/yaml/install.yaml
```
g
would need to pass in values for private images, so i can't use the upstream manifest directly
also need to add -n to that kubectl apply, and `--server-side`, because those CRDs are too big for client-side apply
even with that, the apply doesn't fully work when passing a namespace
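For reference, a sketch of that vanilla install command with the two adjustments just mentioned (an explicit namespace and server-side apply); the `pulumi-operator` namespace is an assumption based on the logs earlier in this thread:

```shell
# Sketch only: install the v2 operator manifest with server-side apply,
# since the Stack CRD is too large for client-side apply's annotation limit.
# The namespace name (pulumi-operator) is assumed from the earlier output.
kubectl apply --server-side -n pulumi-operator \
  -f https://raw.githubusercontent.com/pulumi/pulumi-kubernetes-operator/refs/tags/v2.0.0-beta.2/deploy/yaml/install.yaml
```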
for the workspace pods, can you use the operator image or does it have to be pulumi:latest-nonroot?
also the workspace pod shouldn't be attempting to download dependencies.
```
error: installing dependencies: determining go version: exit st....
```
```
/share/tini /share/agent -- serve --workspace /share/workspace --skip-install --auth-mode kube --kube-workspace-namespace pulumi-operator --kube-workspace-name keyumi-dev
```
wish i knew what that was doing, switching back to helm
```shell
helm install pulumi-operator -n pulumi-operator oci://myversion/charts/pulumi-kubernetes-operator --version 2.0.0 -f values.yaml
```
h
> for the workspace pods, can you use the operator image or does it have to be pulumi:latest-nonroot?
the operator image doesn’t contain the pulumi cli or language runtimes. if you need to customize the image we suggest using pulumi:latest-nonroot as a base.
> also the workspace pod shouldn't be attempting to download dependencies.
please weigh in on https://github.com/pulumi/pulumi-kubernetes-operator/issues/374 if that’s something you’d still like to see.
g
just using a copy of the latest-nonroot for now. need to figure out a way around the dependencies issue
h
what is the problem you’re seeing? it’s essentially just running `pulumi install` in your stack’s directory to get relevant language plugins. does that succeed for you locally?
g
`error: installing dependencies: determining go version: exit status 1`
`go mod tidy` works in the /share/source folder when i override the entrypoint to poke around after the initContainers
h
i would double check that `repoDir` points to the correct path for your stack.
what about `pulumi install`?
g
i'm running `pulumi install` within that dir and got that error
maybe missing an env var?
h
is your stack written in go?
g
i am using the GOPRIVATE and GONOSUMDB vars with a .netrc for local libraries
yes
but it isn't go-ing.
🙂
i bet netrc stuff isn't there
it did pull the repo in though
h
what does `go version` say, and `which go`?
g
1.22
```
/usr/local/go/bin/go
```
just tried:
```shell
pulumi install --logtostderr --logflow -v=9
```
```
I1113 20:05:26.446822      14 plugins.go:1896] GetPluginPath(language, go, <nil>): found on $PATH /usr/bin/pulumi-language-go
I1113 20:05:26.446874      14 plugins.go:1919] GetPluginPath(language, go, <nil>): found next to current executable /usr/bin/pulumi-language-go
I1113 20:05:26.446899      14 plugin.go:189] newPlugin(): Launching plugin 'go' from '/usr/bin/pulumi-language-go' with args: -root=/share/source,127.0.0.1:38575
I1113 20:05:26.466380      14 langruntime_plugin.go:251] langhost[go].GetPluginInfo() executing
I1113 20:05:26.466720      14 langruntime_plugin.go:290] langhost[go].InstallDependencies(Info=[root=/share/source, program=/share/source, entryPoint=.], UseLanguageVersionTools=false) executing
I1113 20:05:26.467528      14 sink.go:170] defaultSink::Infoerr(I1113 20:05:26.467466      32 executable.go:75] program go found in $PATH
)
I1113 20:05:26.467466      32 executable.go:75] program go found in $PATH

Installing dependencies...

I1113 20:05:26.916998      14 langruntime_plugin.go:325] langhost[go].InstallDependencies(Info=[root=/share/source, program=/share/source, entryPoint=.], UseLanguageVersionTools=false) failed: err=determining go version: exit status 1
I1113 20:05:26.918783      14 sink.go:178] defaultSink::Error(error: installing dependencies: determining go version: exit status 1)
error: installing dependencies: determining go version: exit status 1
```
pulumi refresh finds the stack, so thats working
```shell
go env -w GOPRIVATE="example.io/*"
go env -w GONOSUMDB="example.io"
```
able to create stacks and delete them, so backend state is working. it just doesn't like `pulumi install`
very strange. Fixed it by editing the go.mod file: changed `go 1.22` to `go 1.22.0`
f
great success
g
for image overrides i needed to do this to the stack spec:
```yaml
workspaceTemplate:
  spec:
    image: localreg/pulumi/pulumi-kubernetes-operator:2.0.0-beta.2
    podTemplate:
      spec:
        containers:
        - image: localreg/pulumi/pulumi-kubernetes-operator:2.0.0-beta.2
          name: pulumi
        initContainers:
        - image: localreg/pulumi/pulumi-kubernetes-operator:2.0.0-beta.2
          name: bootstrap
        - image: localreg/pulumi/pulumi-kubernetes-operator:2.0.0-beta.2
          name: fetch
```
the pulumi/pulumi:latest-nonroot image is huge btw.
h
> the pulumi/pulumi:latest-nonroot image is huge btw.

we know 🙂 https://github.com/pulumi/pulumi-docker-containers/issues/308
g
very nice
I see, so that is all the languages. You could also reduce the image layers to one by building it with apko and melange, or build with nix to similar effect, as some people do. Just a suggestion
there were a lot of layers; it makes more sense knowing it supports everything
h
> “Fixed it by editing the go.mod file go 1.22 go 1.22.0”

not surprising: since go 1.21, that directive needs to point at an actual release identifier (including the patch version) for toolchain support.
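To illustrate the fix, here is a minimal sketch (the module path is made up) that rewrites a bare `go 1.22` directive to a full release version; with a recent toolchain, `go mod edit -go=1.22.0` in the stack directory does the same edit:

```shell
# Minimal repro of the go.mod fix described above: pin the `go` directive
# to a full release version. The module path is illustrative only.
mkdir -p /tmp/stack-demo
cat > /tmp/stack-demo/go.mod <<'EOF'
module example.io/demo

go 1.22
EOF

# Rewrite `go 1.22` to `go 1.22.0` (equivalent to `go mod edit -go=1.22.0`).
sed -i 's/^go 1\.22$/go 1.22.0/' /tmp/stack-demo/go.mod
grep '^go ' /tmp/stack-demo/go.mod
```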
g
wish there was support for passing in vars globally that apply to all workspace pods. I know they can be set in the stacks themselves, but the helm chart doesn't have a way to apply them to all stacks.
h
that’s a great suggestion, if you have a moment could you file an issue? https://github.com/pulumi/pulumi-kubernetes-operator/issues/new?template=1-feature-request.md
g