
brash-hairdresser-60389

03/02/2023, 3:41 PM
Hello, community; it has been a long time since I last had an issue applying deployment updates. When I update a resource, even with the default strategy, or even if I specify RollingUpdate like the following snippet
strategy: {
            rollingUpdate: { maxUnavailable: '75%', maxSurge: '75%' },
          },
Pulumi deletes the resources, in this case all 3 replicas, and then recreates them. I also tried setting the resource option
deleteBeforeReplace: false,
but nothing changed. The problem is annoying in production since the alerting system always fires, because the liveness probe returns an HTTP 503 error during the update. Any advice? Many thanks for considering my request.
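(For reference, a minimal sketch of how these two settings fit together in a Pulumi TypeScript program; the labels, image and replica count are placeholders, not from the thread. strategy.rollingUpdate lives inside the Deployment spec, while deleteBeforeReplace is a Pulumi resource option passed as the constructor's third argument. Note that the rolling-update strategy only governs in-place updates of the Deployment object; if Pulumi decides the resource itself must be replaced, the whole Deployment is deleted and recreated regardless of the strategy.)

import * as k8s from "@pulumi/kubernetes";

// Sketch only: labels, image and replica count are illustrative placeholders.
const appLabels = { app: "app1" };

const appDeployment = new k8s.apps.v1.Deployment("app1_appDeployment", {
    spec: {
        replicas: 3,
        selector: { matchLabels: appLabels },
        strategy: {
            type: "RollingUpdate",
            rollingUpdate: { maxUnavailable: "75%", maxSurge: "75%" },
        },
        template: {
            metadata: { labels: appLabels },
            spec: {
                containers: [{ name: "app1", image: "nginx:1.25" }],
            },
        },
    },
}, {
    // Pulumi resource option, not part of the Kubernetes spec: when a replacement
    // is planned, create the new resource before deleting the old one.
    deleteBeforeReplace: false,
});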

billowy-army-68599

03/02/2023, 3:47 PM
when you run an up, what happens? does it recreate the task definition?

brash-hairdresser-60389

03/02/2023, 3:58 PM
I’m running the up in a CD pipeline; in the up output the changes always generate a replacement
-- kubernetes:apps/v1:Deployment app1_appDeployment deleting original (0s) 
...
   -- kubernetes:apps/v1:Deployment app1_appDeployment deleted original (0.96s)
The strange behavior is that further down the log it reports that it is trying a replacement
++ kubernetes:apps/v1:Deployment app1_appDeployment creating replacement (0s) 
   ++ kubernetes:apps/v1:Deployment app1_appDeployment creating replacement (0s) [1/2] Waiting for app ReplicaSet be marked available
  
   ++ kubernetes:apps/v1:Deployment app1_appDeployment creating replacement (0s) warning: [MinimumReplicasUnavailable] Deployment does not have minimum availability.
  
   ++ kubernetes:apps/v1:Deployment app1_appDeployment creating replacement (0s) [1/2] Waiting for app ReplicaSet be marked available (0/3 Pods available)
Then afterwards there is a warning on the initContainer: containers with incomplete status

billowy-army-68599

03/02/2023, 3:59 PM
oh, it’s Kubernetes
anything in the replicaset logs?

brash-hairdresser-60389

03/02/2023, 4:02 PM
There is also another resource that is affected by the changes, but it is replaced, not destroyed and recreated
do you think it is a Kubernetes issue @billowy-army-68599?

billowy-army-68599

03/02/2023, 4:56 PM
it’s really hard to say I’m afraid, a lot of complexity there

brash-hairdresser-60389

03/02/2023, 4:58 PM
ok thanks, so isn’t there anything I can do to understand why pulumi up is always destroying the resource? I had a look at the replicaset logs; I can say the replicaset is also always deleted and recreated
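(A sketch outside the thread of one way to see which input is forcing the replacement: run a diff preview before the deploy step and look for the properties flagged as requiring replacement. This uses Pulumi's Node.js Automation API with a placeholder stack name; running pulumi preview --diff from the CLI shows the same information.)

import * as auto from "@pulumi/pulumi/automation";

async function main() {
    // Select the existing stack in the current project directory ("prod" is a placeholder).
    const stack = await auto.LocalWorkspace.selectStack({ stackName: "prod", workDir: "." });
    // A diff preview prints each changed input and flags those that force a replacement.
    const result = await stack.preview({ diff: true, onOutput: console.log });
    console.log(result.changeSummary);
}

main();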

salmon-gold-74709

03/02/2023, 6:23 PM
This sort of K8s error means the deployment is failing - the K8s replicaset controller (I think) is the piece that manages this, and stops deploying new replicas until a new deployment is pushed
so the issue is in the deployment or maybe the containerised app - often it's not staying up long enough to be considered healthy, which would allow the next replica to be deployed
this deployment troubleshooting guide may help https://learnk8s.io/troubleshooting-deployments
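(As an illustration of that last point, not from the thread: the endpoint, port and timings below are placeholders. A readiness probe with a sensible initial delay is what lets the ReplicaSet mark a new pod as available, so the rollout can proceed instead of sitting at 0/3 Pods available, and a more lenient liveness probe avoids the 503 alerts during slow startups.)

import * as k8s from "@pulumi/kubernetes";

// Sketch only: path, port and timings are illustrative.
const appContainer: k8s.types.input.core.v1.Container = {
    name: "app1",
    image: "nginx:1.25",
    ports: [{ containerPort: 8080 }],
    readinessProbe: {
        // The pod only counts as available once this check starts succeeding.
        httpGet: { path: "/healthz", port: 8080 },
        initialDelaySeconds: 10,
        periodSeconds: 5,
    },
    livenessProbe: {
        // Kept more lenient than readiness so a slow startup is not killed and
        // surfaced as an alert mid-rollout.
        httpGet: { path: "/healthz", port: 8080 },
        initialDelaySeconds: 30,
        failureThreshold: 3,
    },
};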