# kubernetes
b
Hello, community; it has been a long time since I have had an issue with applying deployment updates. When I update a resource, even with the default strategy, or even if I explicitly specify RollingUpdate like in the following snippet,
strategy: {
            rollingUpdate: { maxUnavailable: '75%', maxSurge: '75%' },
          },
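For context, a `rollingUpdate` block only takes effect when the strategy `type` is `RollingUpdate` (field names from the Kubernetes Deployment API); a minimal sketch of the full field, keeping the percentages from the snippet above:

```typescript
// Sketch of a complete strategy block for a Kubernetes Deployment spec.
// Only `type` and the two percentages carry meaning here; the rest of a
// real Deployment spec is omitted.
const strategy = {
    type: "RollingUpdate",
    rollingUpdate: {
        maxUnavailable: "75%", // up to 75% of replicas may be down during the update
        maxSurge: "75%",       // up to 75% extra replicas may be created temporarily
    },
};
```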
Pulumi deletes the resources (in this case all 3 replicas) and then recreates them. I also tried to use the parameter
deleteBeforeReplace: false,
but nothing changed. The problem is annoying in production since the alerting system always notifies of an issue because the liveness probe results in an HTTP 503 error. Any advice? Many thanks for considering my request.
b
when you run an `up`, what happens? does it recreate the task definition?
b
I’m running the `up` in a CD pipeline; in the `up` output the changes are always generating a replacement
-- kubernetes:apps/v1:Deployment app1_appDeployment deleting original (0s) 
...
   -- kubernetes:apps/v1:Deployment app1_appDeployment deleted original (0.96s)
The strange behavior is that further down in the log it outputs that it is trying a replacement
++ kubernetes:apps/v1:Deployment app1_appDeployment creating replacement (0s) 
   ++ kubernetes:apps/v1:Deployment app1_appDeployment creating replacement (0s) [1/2] Waiting for app ReplicaSet be marked available
  
   ++ kubernetes:apps/v1:Deployment app1_appDeployment creating replacement (0s) warning: [MinimumReplicasUnavailable] Deployment does not have minimum availability.
  
   ++ kubernetes:apps/v1:Deployment app1_appDeployment creating replacement (0s) [1/2] Waiting for app ReplicaSet be marked available (0/3 Pods available)
Then afterwards there is a warning on the initContainer: containers with incomplete status
b
oh, it’s Kubernetes
anything in the replicaset logs?
b
There is also another resource that is affected by the changes, but it is replaced, not destroyed and recreated
do you think it is a Kubernetes issue @billowy-army-68599?
b
it’s really hard to say I’m afraid, a lot of complexity there
b
ok thanks, so is there anything I can do to understand why `pulumi up` is always destroying the resource? I had a look at the ReplicaSet logs; I can also say the ReplicaSet is always deleted and recreated
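(One common cause, stated here as an assumption since the thread does not confirm it: Pulumi auto-names Kubernetes resources, and a change that alters the generated name, or any change to the immutable `spec.selector`, forces delete-and-recreate instead of a rolling update. A sketch of pinning both, with illustrative names and labels:)

```typescript
// Hypothetical sketch: pin the Kubernetes name and keep the selector labels
// stable so the Deployment can be updated in place. "app1", the labels, and
// the image are illustrative, not taken from the thread.
const appLabels = { app: "app1" };

const deploymentArgs = {
    metadata: { name: "app1" }, // explicit name avoids autoname-driven replacement
    spec: {
        replicas: 3,
        selector: { matchLabels: appLabels }, // immutable: changing it forces replacement
        strategy: {
            type: "RollingUpdate",
            rollingUpdate: { maxUnavailable: "75%", maxSurge: "75%" },
        },
        template: {
            metadata: { labels: appLabels },
            spec: { containers: [{ name: "app", image: "app1:latest" }] },
        },
    },
};
// In a Pulumi program these args would be passed to
// new k8s.apps.v1.Deployment("app1_appDeployment", deploymentArgs).
```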
s
This sort of K8s error means the deployment is failing: the K8s ReplicaSet controller (I think) is the piece that manages this, and it stops deploying new replicas until a new deployment is pushed
so the issue is in the deployment, or maybe the containerised app: often it's not staying up long enough to be considered healthy, which would allow the next replica to be deployed
this deployment troubleshooting guide may help https://learnk8s.io/troubleshooting-deployments