# kubernetes
**User 1:**
With Pulumi operator v2, the workspace StatefulSet gets stuck in a seemingly infinite update loop if the stack config is specified in the Custom Resource. What I observed is that the workspace pod gets a `SetAllConfig` call, followed immediately by `cmd.serve.grpc shutting down the server`. The operator then keeps getting `context deadline exceeded`, and then `Reconciling Workspace` with a different `revision`. It seems the workspace keeps getting updates, each resulting in a different revision number; that update gets picked up by the operator, which updates the StatefulSet, thus killing the pods. Does this sound possible? Is there any other guide on debugging this?
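*For reference, a minimal sketch of what "stack config specified in the Custom Resource" means here; all names and values are hypothetical, and the flat `config` map on the Stack spec is an assumption based on the operator's Stack CRD, so verify against your operator version:*

```yaml
apiVersion: pulumi.com/v1
kind: Stack
metadata:
  name: demo-stack            # hypothetical name
spec:
  stack: dev                  # target stack
  config:                     # inline stack config, e.g. six plain strings
    app:region: us-west-2
    app:logLevel: debug
```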
**User 2:**
Could you include some logs and an example of the Stack resource? Are you setting a large number of configs?
**User 1:**
How do I enable logs? I was able to enable logs in the operator pod, but not in the workspace pod. I tried manually changing the StatefulSet to add `-v=true`, but then the operator complained that it could not update the StatefulSet. I'm setting 6 configs (6 strings).
**User 2:**
The operator and workspace pods should already emit some logs by default; you can grab them with `kubectl logs`.
**User 1:**
The workspace pod logs don't currently show anything useful... It's just:
- start server
- installing dependency
- got a `SelectStack`
- got a `SetAllConfig`
- shutting down server
**User 2:**
Those logs could still be helpful!
**User 1:**
This is the operator log: *(attachment)*
This is the workspace pod log: *(attachment)*
**User 2:**
From the operator log: `No source specified`. What does your stack look like? It sounds like it isn't cloning anything into the workspace.
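*For contrast, a Stack that gives the operator a source to clone sets the repo fields on the spec; a rough sketch with hypothetical values, using the `projectRepo`/`branch`/`repoDir` fields from the operator's Stack CRD:*

```yaml
apiVersion: pulumi.com/v1
kind: Stack
metadata:
  name: demo-stack
spec:
  stack: dev
  projectRepo: https://github.com/example/infra   # hypothetical repo URL
  branch: main                                    # branch to track
  repoDir: deploy                                 # subdirectory containing Pulumi.yaml
```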
**User 1:**
It's local only, with a prebuilt Go binary:

```yaml
runtime:
  name: go
  options:
    binary: /main
```

The binary is in the image.
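*For completeness, the full Pulumi.yaml for a prebuilt-binary project would look roughly like this; the project name is hypothetical, and `runtime.options.binary` tells the engine to run that binary instead of compiling the Go program:*

```yaml
name: my-project       # hypothetical project name
runtime:
  name: go
  options:
    binary: /main      # prebuilt binary baked into the workspace image
```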