brash-hairdresser-60389
03/02/2023, 3:41 PM
strategy: {
  rollingUpdate: { maxUnavailable: '75%', maxSurge: '75%' },
},
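(For context on what those percentages mean: Kubernetes resolves a percentage against the replica count, rounding maxUnavailable down and maxSurge up, so with 3 replicas the settings above allow up to 2 pods unavailable and up to 3 extra pods during a rolling update. A minimal sketch of that arithmetic, with illustrative names, not Pulumi API code:)

```typescript
// Sketch of how the Kubernetes Deployment controller resolves the
// percentage values above (standard rounding rules: maxUnavailable
// rounds down, maxSurge rounds up).
function resolvePercent(value: string, replicas: number, roundUp: boolean): number {
  const fraction = parseInt(value, 10) / 100; // '75%' -> 0.75
  const raw = fraction * replicas;
  return roundUp ? Math.ceil(raw) : Math.floor(raw);
}

// With replicas: 3 and the '75%' settings from the snippet above:
const maxUnavailable = resolvePercent("75%", 3, false); // floor(2.25) = 2
const maxSurge = resolvePercent("75%", 3, true);        // ceil(2.25)  = 3
console.log({ maxUnavailable, maxSurge });
```

(Note these limits only apply to an in-place rolling update; if the Deployment resource itself is replaced, the strategy never gets a chance to run.)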
Pulumi deletes the resources (in this case, all 3 replicas) and then recreates them. I also tried setting the resource option deleteBeforeReplace: false, but nothing changed.
The problem is annoying in production, since the alerting system reports an issue every time: while the Deployment is being replaced, the liveness probe returns an HTTP 503 error.
Any advice?
Many thanks for considering my request.

billowy-army-68599
03/02/2023, 3:47 PM
When you run pulumi up, what happens? Does it recreate the task definition?

brash-hairdresser-60389
03/02/2023, 3:58 PM
pulumi up runs in the CD pipeline; in the pulumi up output the changes always generate a replacement:
-- kubernetes:apps/v1:Deployment app1_appDeployment deleting original (0s)
...
-- kubernetes:apps/v1:Deployment app1_appDeployment deleted original (0.96s)
The strange behavior is that further down in the log it shows that it is attempting a replacement:
++ kubernetes:apps/v1:Deployment app1_appDeployment creating replacement (0s)
++ kubernetes:apps/v1:Deployment app1_appDeployment creating replacement (0s) [1/2] Waiting for app ReplicaSet be marked available
++ kubernetes:apps/v1:Deployment app1_appDeployment creating replacement (0s) warning: [MinimumReplicasUnavailable] Deployment does not have minimum availability.
++ kubernetes:apps/v1:Deployment app1_appDeployment creating replacement (0s) [1/2] Waiting for app ReplicaSet be marked available (0/3 Pods available)
Then afterwards there is a warning on the initContainer: containers with incomplete status.

billowy-army-68599
03/02/2023, 3:59 PM
brash-hairdresser-60389
03/02/2023, 4:02 PM
billowy-army-68599
03/02/2023, 4:56 PM
brash-hairdresser-60389
03/02/2023, 4:58 PM
salmon-gold-74709
03/02/2023, 6:23 PM
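(Editor's note for anyone hitting the same symptom: a replacement happens when an input Pulumi considers immutable changes, e.g. spec.selector in an apps/v1 Deployment, and deleteBeforeReplace only controls the order of the delete and create during a replacement, not whether the replacement happens. Also, a fixed metadata.name forces delete-before-create because the old and new Deployments would collide on the same name, while leaving the name unset lets Pulumi auto-name the resource and create the replacement before deleting the original. A hedged sketch of such a Deployment; labels, image, and container name are illustrative, not from the original code:)

```typescript
import * as k8s from "@pulumi/kubernetes";

const appLabels = { app: "app1" }; // illustrative labels

const appDeployment = new k8s.apps.v1.Deployment("app1_appDeployment", {
    // NOTE: metadata.name is deliberately left unset so Pulumi auto-names
    // the resource; this permits create-before-delete on replacement,
    // avoiding the window where the liveness probe returns 503.
    spec: {
        replicas: 3,
        // selector is immutable in apps/v1: changing it forces a replacement
        selector: { matchLabels: appLabels },
        strategy: {
            type: "RollingUpdate",
            rollingUpdate: { maxUnavailable: "75%", maxSurge: "75%" },
        },
        template: {
            metadata: { labels: appLabels },
            spec: {
                containers: [{ name: "app", image: "nginx:1.25" }], // illustrative
            },
        },
    },
});
```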