bitter-carpenter-93554
09/27/2022, 9:09 PM

sparse-intern-71089
10/04/2022, 7:25 PM

bitter-carpenter-93554
10/06/2022, 1:13 AM
Oct 5 00:07:59 sj1010010242096 k3s[350891]: I1005 00:07:59.811609 350891 kubelet_pods.go:888] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/pulumi-k8s-operator-92ajub40-6ff76b67c5-fflmm" secret="" err="secret \"pulumi-kubernetes-operator\" not found"
bitter-carpenter-93554
10/06/2022, 1:14 AM
Is it the `pulumi-kubernetes-operator` secret that was not created? If it should be created, how do I create it?

bitter-carpenter-93554
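On the pull-secret question above: the kubelet warning means the pod (or its service account) references an imagePullSecret named `pulumi-kubernetes-operator` that doesn't exist in the `default` namespace. One common fix is to create that docker-registry Secret; this is only a sketch, and the registry server and credentials below are placeholders, not values from this thread:

```shell
# Create the image pull secret the kubelet is looking for.
# Registry, username, and password are placeholders to fill in.
kubectl create secret docker-registry pulumi-kubernetes-operator \
  --namespace default \
  --docker-server=registry.example.com \
  --docker-username=<user> \
  --docker-password=<token>
```

If the image is actually public, the other option is to drop the `imagePullSecrets` reference from the pod spec or service account instead.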
10/25/2022, 1:51 AM
`refresh` for the Stack is set to `true`. Operator logs are flooded with errors:
{"level":"error","ts":"2022-10-17T21:43:23.829Z","logger":"controller_stack","msg":"No permalink found.","Request.Namespace":"pulumi","Request.Name":"s3-app1.dev.global-535d1cd0","Namespace":"pulumi","error":"failed to get permalink","errorVerbose":"failed to get permalink\ngithub.com/pulumi/pulumi/sdk/v3/go/auto.init\n\t/home/runner/go/pkg/mod/github.com/pulumi/pulumi/sdk/v3@v3.39.3/go/auto/stack.go:766\nruntime.doInit\n\t/opt/hostedtoolcache/go/1.19.1/x64/src/runtime/proc.go:6321\nruntime.doInit\n\t/opt/hostedtoolcache/go/1.19.1/x64/src/runtime/proc.go:6298\nruntime.doInit\n\t/opt/hostedtoolcache/go/1.19.1/x64/src/runtime/proc.go:6298\nruntime.doInit\n\t/opt/hostedtoolcache/go/1.19.1/x64/src/runtime/proc.go:6298\nruntime.main\n\t/opt/hostedtoolcache/go/1.19.1/x64/src/runtime/proc.go:233\nruntime.goexit\n\t/opt/hostedtoolcache/go/1.19.1/x64/src/runtime/asm_amd64.s:1594","stacktrace":"github.com/pulumi/pulumi-kubernetes-operator/pkg/controller/stack.(*ReconcileStack).Reconcile\n\t/home/runner/work/pulumi-kubernetes-operator/pulumi-kubernetes-operator/pkg/controller/stack/stack_controller.go:384\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.9.0/pkg/internal/controller/controller.go:298\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.9.0/pkg/internal/controller/controller.go:253\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.9.0/pkg/internal/controller/controller.go:214"}
bitter-carpenter-93554
10/25/2022, 1:51 AM

bitter-carpenter-93554
10/27/2022, 9:33 PM
error: could not create stack: the stack is currently locked by 1 lock(s). Either wait for the other process(es) to end or delete the lock file with `pulumi cancel`.
azblob://state/.pulumi/locks/pulumi-operator.dev.global/ab367336-f13c-4f77-9af0-1a5517b35a97.json: created by pulumi-kubernetes-operator@pulumi-kubernetes-operator-6677a05e-5955496654-ps5t2 (pid 478) at 2022-10-27T20:36:29Z
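Since the lock-holding pod was terminated (per the next message), the lock is stale, and the error's own suggestion (`pulumi cancel`) can be run from any machine with access to the same backend. A sketch only; the backend URL and stack name are taken from the log above, and the storage credentials are assumed to already be configured in the environment:

```shell
# Point a local CLI at the operator's azblob state backend and
# cancel the stale lock left by the terminated operator pod.
pulumi login azblob://state
pulumi stack select pulumi-operator.dev.global
pulumi cancel
```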
bitter-carpenter-93554
10/27/2022, 9:33 PM
`pulumi-kubernetes-operator-6677a05e-5955496654-ps5t2` is no longer the leader, or was terminated.

bitter-carpenter-93554
10/27/2022, 9:36 PM

best-winter-27868
11/03/2022, 10:26 PM

sparse-intern-71089
11/04/2022, 10:34 PM

bitter-carpenter-93554
11/07/2022, 10:44 PMleaseDurationSeconds
for pulumi-kubernetes-operator-lock
?
# kubectl get leases.coordination.k8s.io pulumi-kubernetes-operator-lock -o yaml -n pulumi
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  ...
spec:
  acquireTime: "2022-11-07T22:33:44.000000Z"
  holderIdentity: pulumi-kubernetes-operator-6677a05e-76c89475c9-pxhjj_921c1d24-8976-4c65-974c-f261dda2bf9c
  leaseDurationSeconds: 15
  leaseTransitions: 100
  renewTime: "2022-11-07T22:41:56.961841Z"
wide-vase-68248
11/11/2022, 1:08 AM

bitter-carpenter-93554
11/14/2022, 11:05 PM

prehistoric-london-9917
12/05/2022, 7:52 PM
…Stacks and Programs to be deployed.
Is there a way to have one deployment of the operator that monitors all the cluster namespaces for Stacks and Programs? It seems a waste of resources to have to deploy it everywhere. Or am I missing something?

wide-vase-68248
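On the all-namespaces question above: in operator-sdk based operators (which the v1 Pulumi operator is), the watched namespace is conventionally controlled by a `WATCH_NAMESPACE` environment variable on the operator Deployment, with an empty value meaning "watch all namespaces". This is a recollection of the convention, not something confirmed in this thread, so verify against the operator's own docs. A sketch of the relevant container-spec fragment:

```yaml
# Hypothetical fragment of the operator Deployment's container spec.
env:
  - name: WATCH_NAMESPACE
    value: ""   # empty = watch all namespaces (operator-sdk convention)
```

Note that watching all namespaces also requires cluster-wide RBAC (ClusterRole/ClusterRoleBinding) rather than the namespaced Role the default install may use.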
12/07/2022, 12:07 PM

loud-energy-23510
12/16/2022, 8:15 AM

quiet-wolf-18467
purple-beach-36424
02/13/2023, 11:27 PM
…crd2pulumi CLI (to a Node.js project), and there is a weird behavior that I can't wrap my head around:
When previewing a deployment, Pulumi tries to install the `kubernetes` resource plugin with the version of my custom plugin… even though the plugin should already have been installed.
@ previewing update....
pulumi:providers:kubernetes default_0_0_14_c049b797_0 error: Could not automatically download and install resource plugin 'pulumi-resource-kubernetes' at version v0.0.14-c049b797.0, install the plugin using `pulumi plugin install resource kubernetes v0.0.14-c049b797.0`.
pulumi:providers:kubernetes default_0_0_14_c049b797_0 1 error
pulumi:pulumi:Stack prometheus-sandbox11.us-east-1
Diagnostics:
pulumi:providers:kubernetes (default_0_0_14_c049b797_0):
error: Could not automatically download and install resource plugin 'pulumi-resource-kubernetes' at version v0.0.14-c049b797.0, install the plugin using `pulumi plugin install resource kubernetes v0.0.14-c049b797.0`.
Underlying error: error downloading plugin kubernetes to file: failed to download plugin: kubernetes-0.0.14-c049b797.0: 403 HTTP error fetching plugin from https://get.pulumi.com/releases/plugins/pulumi-resource-kubernetes-v0.0.14-c049b797.0-linux-amd64.tar.gz
Obviously this won't work, but it's very unclear where Pulumi gets the idea that it needs to re-install the kubernetes plugin at that version, and why on earth it would be linked to my CRD version 🤷‍♂️ 😅
Does anyone know how to resolve this?

magnificent-address-3498
05/12/2023, 7:22 AM

helpful-baker-3769
08/14/2023, 5:59 AM
…`Program` CR? I have this in my `Stack`:
---
apiVersion: pulumi.com/v1
kind: Stack
metadata:
  name: github
spec:
  stack: budimanjojo/github
  programRef:
    name: github
  destroyOnFinalize: true
  envRefs:
    GITHUB_OWNER:
      type: Literal
      literal:
        value: budimanjojo
    GITHUB_TOKEN:
      type: Secret
      secret:
        name: github-secret
        key: GITHUB_TOKEN
    PULUMI_ACCESS_TOKEN:
      type: Secret
      secret:
        name: github-secret
        key: PULUMI_ACCESS_TOKEN
  secretsRef:
    test_secret:
      type: Secret
      secret:
        name: github-secret
        key: test_secret
In my `Program`, I want to access `test_secret`, but I get this error: `resource "test_secret" not found`. Here's the `Program`:
---
apiVersion: pulumi.com/v1
kind: Program
metadata:
  name: github
program:
  resources:
    exampleVariable:
      type: github:ActionsSecret
      properties:
        repository: pulumi-testing
        secretName: test_secret
        encryptedValue: ${test_secret}
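One thing worth checking (an editor's guess, not confirmed in this thread): `secretsRef` entries become stack *config* values, but in a Pulumi YAML program an interpolation like `${test_secret}` is resolved as a reference to a resource or variable unless the key is declared as configuration, which would explain the `resource "test_secret" not found` error. A sketch of the same `Program` with a `configuration` block added; verify the exact field name against the Pulumi YAML / Program CR schema:

```yaml
apiVersion: pulumi.com/v1
kind: Program
metadata:
  name: github
program:
  # Declaring test_secret as config should let ${test_secret} resolve to
  # the stack config value injected via the Stack's secretsRef (assumed).
  configuration:
    test_secret:
      type: String
  resources:
    exampleVariable:
      type: github:ActionsSecret
      properties:
        repository: pulumi-testing
        secretName: test_secret
        encryptedValue: ${test_secret}
```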
billions-yak-67755
09/01/2023, 2:12 PM

prehistoric-author-47929
09/26/2023, 5:22 AM

billions-yak-67755
10/16/2023, 4:41 PM
…`pulumi import` with the Kubernetes operator? Exec into the container? Try to build an artificial local environment that targets the same state?

great-restaurant-2080
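On the `pulumi import` question above: the "artificial local environment" route is usually workable, because the operator ultimately just runs the Pulumi CLI against a state backend, so a local CLI pointed at the same backend and stack can run the import. A sketch only; the backend URL, stack name, and resource below are entirely hypothetical:

```shell
# Point a local CLI at the same backend/stack the operator manages,
# then run the import there. All names are placeholders.
pulumi login azblob://state               # operator's backend (assumed)
pulumi stack select my-stack              # stack named in the Stack CR
pulumi import kubernetes:core/v1:ConfigMap my-cm my-namespace/my-cm
```

Any secrets provider or passphrase the Stack CR configures has to be reproduced locally too, or the CLI won't be able to open the stack's state.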
11/30/2023, 7:20 PM
Failed to initialize stack: installing project dependencies: exit status 1
gifted-park-20161
01/15/2024, 1:09 AM

gifted-park-20161
01/15/2024, 1:45 AM

strong-microphone-65970
01/30/2024, 9:45 PM
…`kubernetes:context: <Cluster Context>` in the stack to specify which cluster to make these changes to. However, when trying to use the operator to control this stack, it comes back with the following error:
context "<Cluster Context>" does not exist
Is it possible to add the cluster context to the operator so that it is able to find that cluster?

calm-dog-66392
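On the context question above: the likely cause is that the operator pod has no kubeconfig containing that context (locally your `~/.kube/config` supplies it). One approach, sketched here with assumed names, is to mount a kubeconfig Secret into the operator Deployment and point `KUBECONFIG` at it, so the `kubernetes:context` setting can resolve:

```yaml
# Hypothetical patch to the operator Deployment; Secret name and
# mount path are assumptions, not values from this thread.
spec:
  template:
    spec:
      containers:
        - name: pulumi-kubernetes-operator
          env:
            - name: KUBECONFIG
              value: /etc/kubeconfig/config
          volumeMounts:
            - name: kubeconfig
              mountPath: /etc/kubeconfig
              readOnly: true
      volumes:
        - name: kubeconfig
          secret:
            secretName: cluster-kubeconfig
```

An alternative is to put the kubeconfig contents into the stack's `kubernetes:kubeconfig` config instead of relying on a named context.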
02/03/2024, 2:33 AM

worried-helmet-23171
02/15/2024, 6:53 PM