# kubernetes


03/26/2021, 1:25 PM
if they're connected to a Deployment or StatefulSet you could in principle scale it down to 0 and then back up again… but that's a bit wonky. I might use that in combination with automation for tasks related to growing persistent volume claims
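A sketch of that scale-to-zero-and-back approach, assuming a Deployment named `my-app` with a matching `app=my-app` label (both hypothetical names, not from the log):

```shell
# Scale the Deployment to zero, forcing all pods to terminate
kubectl scale deployment/my-app --replicas=0

# Wait until the old pods are actually gone before scaling back up
kubectl wait --for=delete pod -l app=my-app --timeout=60s

# Scale back to the desired replica count (3 here as an example)
kubectl scale deployment/my-app --replicas=3
```

Note this takes every replica down at once, which is why it implies downtime.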


03/26/2021, 1:36 PM
You could do that, but you'd have to do multiple deploys, which, yes, might be a bit wonky, and it would mean downtime. If you have multiple replicas and run the command I suggested, it'll do a nice rollout and you shouldn't end up with any downtime
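The earlier suggested command isn't quoted in this excerpt; assuming it was a rolling restart (the usual way to cycle pods without downtime, one hypothetical Deployment name `my-app` used here), it would look like:

```shell
# Replaces pods one at a time; with multiple replicas there is always
# at least one old pod serving traffic while a new one starts
kubectl rollout restart deployment/my-app

# Block until the rollout has fully completed
kubectl rollout status deployment/my-app
```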


03/26/2021, 3:26 PM
yup - the problem with growing volumes is that they get into a state requiring a pod restart 😉 in my experience on AKS, it's not as simple as killing the pod so it pops right back up; it needs to go away "for a little while" (around 10 seconds or so, in my experience) before it comes back. the ideal flow with a StatefulSet that has more than one instance is then to scale it down, scale it up so that at least one pod gets its volume, then repeatedly hammer the rest with kill signals 😕
but yeah - as you said - downtime 😄
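The volume-growth flow above could be sketched like this, assuming a StatefulSet named `db` with PVCs named `data-db-0` etc. (hypothetical names), and a StorageClass with `allowVolumeExpansion: true`:

```shell
# Request a larger size on the claim; the actual filesystem resize
# typically only completes once the pod using it restarts
kubectl patch pvc data-db-0 \
  -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'

# Kill the pod so it picks up the resized volume (per the log, it may
# need to stay gone ~10s on AKS before coming back healthy)
kubectl delete pod db-0

# Or, for the whole set: scale down and back up, accepting downtime
kubectl scale statefulset/db --replicas=0
kubectl scale statefulset/db --replicas=3
```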