# aws
c
Hello everyone, I am managing my production stack with Pulumi, and from time to time when I make changes to my infra stack all nodes get replaced, which takes down all our services. Below is the part of the stack update that replaces the nodes:
```
eks:index:Cluster$aws:ec2/launchConfiguration:LaunchConfiguration (x-cluster-nodeLaunchConfiguration)
+- aws:ec2/launchConfiguration:LaunchConfiguration (replace)
```
I am trying to find out how to fix this. From a little investigation, I think that sometimes one of the nodes gets terminated for some reason and replaced by the autoscaling group with a new node (EC2 instance), and then during the next deployment the node launch config naturally has to be updated to reflect the new live instances. But I don't want downtime on my services because of this. Thank you, Salah
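One thing that can help here (a minimal sketch, not Salah's actual config): if the default node group's AMI and instance settings are pinned explicitly on the eks.Cluster, the generated launch configuration should stay stable between updates instead of being replaced whenever a freshly resolved value shows up in the diff. All values below are placeholders.

```typescript
import * as eks from "@pulumi/eks";

// Pinning the AMI and instance settings for the default node group keeps
// the generated launch configuration stable between `pulumi up` runs, so
// it is only replaced when these values actually change.
// All values are placeholders, not the real cluster's settings.
const cluster = new eks.Cluster("x-cluster", {
    instanceType: "t3.large",            // placeholder instance type
    desiredCapacity: 3,
    minSize: 3,
    maxSize: 6,
    nodeAmiId: "ami-0123456789abcdef0",  // pin the worker AMI explicitly
});

export const kubeconfig = cluster.kubeconfig;
```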
v
I think you can set the rolling update policy on the launch config to achieve 0 downtime
ah, that might just be launch templates actually, my bad
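For what it's worth, here is a rough sketch of the launch-template route using plain @pulumi/aws resources instead of the eks.Cluster default node group: an aws.ec2.LaunchTemplate plus an aws.autoscaling.Group whose instanceRefresh block rolls nodes gradually rather than replacing them all at once. The AMI, instance type, and subnet IDs are placeholders, and real EKS workers would also need bootstrap user data and an instance profile, which are omitted here.

```typescript
import * as aws from "@pulumi/aws";

// Worker launch template; changing it publishes a new version instead of
// forcing the whole resource to be replaced.
const nodeTemplate = new aws.ec2.LaunchTemplate("x-cluster-nodes", {
    imageId: "ami-0123456789abcdef0",  // placeholder EKS-optimized AMI
    instanceType: "t3.large",          // placeholder instance type
    // Real EKS workers also need bootstrap user data and an instance
    // profile; omitted to keep the sketch short.
});

// Autoscaling group that rolls instances gradually when the template
// changes, keeping at least 90% of capacity in service during the refresh.
const nodeGroup = new aws.autoscaling.Group("x-cluster-nodes", {
    minSize: 3,
    maxSize: 6,
    desiredCapacity: 3,
    vpcZoneIdentifiers: ["subnet-aaaa", "subnet-bbbb"],  // placeholder subnets
    launchTemplate: {
        id: nodeTemplate.id,
        version: "$Latest",  // always launch from the newest template version
    },
    instanceRefresh: {
        strategy: "Rolling",
        preferences: {
            minHealthyPercentage: 90,
        },
    },
});
```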