# kubernetes
s: Having trouble with our current state (paid customer). We're using a Helm chart to install Harbor in our EKS cluster. This chart generates several `Secret`s used for encryption and mounts those as volumes. At some point during an update, the `Secret`s are no longer in the k8s cluster but Pulumi believes they are. These missing `Secret`s are preventing pods from starting, and the pods are stuck in the creation state (`ContainerCreating` or `CreateContainerConfigError`). I've attempted refreshing multiple times, and Pulumi still believes these `Secret`s are there; they are never updated in the state. I did find these entries in the job run logs from a job I ran to preview stack changes a few hours ago (we're using GH Actions):
```
-- kubernetes:core/v1:Secret ***/***-dev-***-jobservice delete original
+- kubernetes:core/v1:Secret ***/***-dev-***-jobservice replace [diff: ~data]
++ kubernetes:core/v1:Secret ***/***-dev-***-jobservice create replacement [diff: ~data]
   kubernetes:core/v1:Secret ***/***-dev-***-trivy
-- kubernetes:core/v1:Secret ***/***-dev-***-registry delete original
+- kubernetes:core/v1:Secret ***/***-dev-***-registry replace [diff: ~data]
++ kubernetes:core/v1:Secret ***/***-dev-***-registry create replacement [diff: ~data]
   kubernetes:core/v1:ConfigMap ***/***-dev-***-core
-- kubernetes:core/v1:Secret ***/***-dev-***-core delete original
+- kubernetes:core/v1:Secret ***/***-dev-***-core replace [diff: ~data]
++ kubernetes:core/v1:Secret ***/***-dev-***-core create replacement [diff: ~data]
```
The `-dev-***-trivy` is the only `Secret` that remains in the cluster. In a follow-up job to perform the refresh, I found these logs:
```
~  kubernetes:core/v1:Secret ***/***-dev-***-jobservice refreshing (0s)
   kubernetes:core/v1:Secret ***/***-dev-***-jobservice  (0.00s)
~  kubernetes:core/v1:ConfigMap monitoring/prometheus-server refreshing (0s)
   kubernetes:core/v1:ConfigMap monitoring/prometheus-server  (0.00s)
~  kubernetes:core/v1:Secret ***/***-dev-***-core refreshing (0s)
   kubernetes:core/v1:Secret ***/***-dev-***-core  (0.00s)
~  kubernetes:core/v1:Secret ***/***-dev-***-registry refreshing (0s)
   kubernetes:core/v1:Secret ***/***-dev-***-registry  (0.00s)
```
and these lower in the same logs:
```
@ Refreshing....
   kubernetes:core/v1:ConfigMap ***/***-dev-***-registry  (8s)
   kubernetes:core/v1:Secret ***/***-dev-***-trivy  (9s)
   kubernetes:core/v1:ConfigMap ***/***-dev-***-jobservice-env  (9s)
```
b: If I'm understanding correctly, the `Secret` is "disappearing" from the cluster?
s: More like it disappeared; I don't know exactly when, and running Pulumi doesn't bring it back. Pulumi itself gets stuck and "fails" because the pods relying on the `Secret` can never finish creating and they time out.
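One way out of this kind of drift is to delete the stale entries from the stack's state so the next `pulumi up` sees the `Secret`s as absent and recreates them. A minimal sketch; the namespace and URN below are placeholders, not values from this thread:

```shell
# Check what actually exists in the cluster (namespace is a placeholder)
kubectl get secrets -n harbor

# List the URNs Pulumi currently has in state for this stack
pulumi stack --show-urns

# Remove the stale Secret from state so Pulumi no longer believes it exists.
# The URN below is a placeholder; copy the real one from the output above.
pulumi state delete 'urn:pulumi:dev::my-project::kubernetes:core/v1:Secret::harbor-jobservice'

# Re-run the deployment; Pulumi now sees the Secret as missing and creates it
pulumi up
```

This only edits Pulumi's bookkeeping, not the cluster, so it is a reasonable way to reconcile state when `pulumi refresh` isn't picking up the deletion.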
b: Okay, so this is (likely) because you're deploying a Helm release. The release itself hasn't changed, so Pulumi thinks it's fine; the Helm release itself manages the resources.
s: Does it? I thought Pulumi renders the templates using Helm but then applies them using the equivalent of `kubectl apply`. How do I resolve it?
b: Sorry, are you using `helm.Release` or `helm.Chart`? If it's the latter, you're right: it templates, and I'm wrong.
s: `helm.Chart`. afaik they both do that.

Got it: `helm.Release` is delegating to the Helm binary.
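The distinction being discussed can be sketched with the Pulumi Kubernetes SDK. The resource names, chart version, repo URL, and namespace below are illustrative placeholders, not taken from the actual stack:

```typescript
import * as k8s from "@pulumi/kubernetes";

// helm.v3.Chart: Pulumi runs Helm's templating client-side, then manages every
// rendered manifest (Secrets, ConfigMaps, Deployments, ...) as its own
// first-class resource in the Pulumi state.
const harborChart = new k8s.helm.v3.Chart("harbor-chart", {
    chart: "harbor",
    version: "1.13.0",                          // placeholder version
    fetchOpts: { repo: "https://helm.goharbor.io" },
    namespace: "harbor",                        // placeholder namespace
});

// helm.v3.Release: Pulumi delegates the install to Helm itself and tracks only
// the Release in state; Helm owns the child resources and their lifecycle.
const harborRelease = new k8s.helm.v3.Release("harbor-release", {
    chart: "harbor",
    version: "1.13.0",
    repositoryOpts: { repo: "https://helm.goharbor.io" },
    namespace: "harbor",
});
```

With `Chart`, each `Secret` appears individually in `pulumi refresh` output (as in the logs above); with `Release`, Pulumi only refreshes the release metadata, so per-resource drift is invisible to it.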
@billowy-army-68599, do you think that if I convert the `helm.Chart` to a `helm.Release` in my IaC it will redeploy everything? The DB is external, as are the S3 buckets, so those can still be referenced and have no state changes.