# kubernetes
s
in a follow-up job that performs the refresh, I found these logs:
```
 ~  kubernetes:core/v1:Secret ***/***-dev-***-jobservice refreshing (0s)
    kubernetes:core/v1:Secret ***/***-dev-***-jobservice  (0.00s)
 ~  kubernetes:core/v1:ConfigMap monitoring/prometheus-server refreshing (0s)
    kubernetes:core/v1:ConfigMap monitoring/prometheus-server  (0.00s)
 ~  kubernetes:core/v1:Secret ***/***-dev-***-core refreshing (0s)
    kubernetes:core/v1:Secret ***/***-dev-***-core  (0.00s)
 ~  kubernetes:core/v1:Secret ***/***-dev-***-registry refreshing (0s)
    kubernetes:core/v1:Secret ***/***-dev-***-registry  (0.00s)
```
and these lower in the same logs:
```
@ Refreshing....
    kubernetes:core/v1:ConfigMap ***/***-dev-***-registry  (8s)
    kubernetes:core/v1:Secret ***/***-dev-***-trivy  (9s)
    kubernetes:core/v1:ConfigMap ***/***-dev-***-jobservice-env  (9s)
```
b
if I’m understanding correctly, the secret is “disappearing” from the cluster?
s
more like it disappeared (I don't know exactly when), and running pulumi doesn't bring it back up. pulumi itself gets stuck and "fails" because the pods relying on the secret can never finish creating and time out
b
okay, so this is (likely) because you're deploying a Helm release. The release itself hasn't changed, so Pulumi thinks it's fine. The Helm release itself manages the resources
s
does it? I thought pulumi renders the templates using helm but then applies them using the equivalent of `kubectl apply`
how do I resolve it?
b
sorry, are you using `helm.Release` or `helm.Chart`?
if it's the latter, you're right: it templates, and I'm wrong
s
`helm.Chart`
afaik they both do that
got it, `helm.Release` is delegating to the helm binary
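For reference, a minimal sketch of the difference, assuming the Pulumi TypeScript SDK; the chart name and repo URL here are hypothetical:
```typescript
import * as k8s from "@pulumi/kubernetes";

// helm.Chart: Pulumi renders the templates locally (helm template) and then
// adopts every rendered manifest as its own resource in state, applying each
// one itself, much like `kubectl apply`.
const chart = new k8s.helm.v3.Chart("app", {
    chart: "my-chart",                                  // hypothetical
    fetchOpts: { repo: "https://example.com/charts" },  // hypothetical
});

// helm.Release: Pulumi delegates to the Helm engine, which installs the
// release and owns the child resources; Pulumi state tracks only the
// Release object itself.
const release = new k8s.helm.v3.Release("app-release", {
    chart: "my-chart",                                       // hypothetical
    repositoryOpts: { repo: "https://example.com/charts" },  // hypothetical
});
```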
@billowy-army-68599 do you think if I convert the `helm.Chart` to a `helm.Release` in my IaC it will redeploy everything?
the db is external and we're using s3 buckets, so those can still be referenced and have no state changes
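One hedged note on that question: Pulumi identifies resources by URN, which includes the resource type, so swapping `helm.Chart` for `helm.Release` changes every URN, and `pulumi up` should plan a delete of the old Chart-managed objects and a create of the new Release. A sketch with hypothetical names:
```typescript
import * as k8s from "@pulumi/kubernetes";

// Before (Chart): Pulumi state held each rendered Secret/ConfigMap/etc.
// const app = new k8s.helm.v3.Chart("app", { chart: "my-chart" });

// After (Release): a different resource type means different URNs, so Pulumi
// plans delete-then-create for the in-cluster objects even if the rendered
// manifests are identical. The external DB and S3 buckets are untouched,
// since they are not cluster resources managed by this stack.
const app = new k8s.helm.v3.Release("app", {
    chart: "my-chart",                                       // hypothetical
    repositoryOpts: { repo: "https://example.com/charts" },  // hypothetical
});
```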
@billowy-army-68599 something I noticed while troubleshooting a failing Ingress resource is that the YAML rendered from a `helm.v3.Chart` does not take the cluster's k8s version into account: the template in my Chart renders differently depending on `.Capabilities`, based on the available API resource versions. Is there a way, either through the `kubernetes.Provider` or `helm.v3.Chart`, to tell the command what version of k8s to use for rendering?
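One possible route, hedged since SDK versions differ: `ChartOpts` accepts an `apiVersions` list that feeds `.Capabilities.APIVersions` during client-side rendering, and newer releases of `@pulumi/kubernetes` also expose a `kubeVersion` field for `.Capabilities.KubeVersion`; a sketch with hypothetical values:
```typescript
import * as k8s from "@pulumi/kubernetes";

// Sketch: steer .Capabilities during client-side template rendering. Both
// fields are assumptions about the installed SDK version; check the
// ChartOpts typings in your version before relying on them.
const chart = new k8s.helm.v3.Chart("app", {
    chart: "my-chart",                                  // hypothetical
    fetchOpts: { repo: "https://example.com/charts" },  // hypothetical
    // Makes `.Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress"`
    // evaluate to true while templating.
    apiVersions: ["networking.k8s.io/v1/Ingress"],
    // Sets `.Capabilities.KubeVersion`; present in newer SDKs only.
    kubeVersion: "1.24",
});
```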