# kubernetes
g
I’m seeing some “bad” behavior when I’m deploying a `deployment` whose replica count is controlled by an HPA. Basically, when a `deployment` change happens, it will scale the replica set behind the deployment to `1` replica, destroying what the HPA has set. I originally had `replicas: 1` in the deployment spec, but I removed that and I still see this behavior. Anyone else experience this, and is there a way to fix it?
I think this turned out to still happen when pulumi was “removing” the replica count from the deployment. Further deployments with the replica count removed don’t cause this.
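For context, a minimal sketch of the kind of setup being described (resource names, image, and metrics are illustrative, not the poster’s actual code): a Deployment whose spec omits `replicas`, paired with an HPA that owns the replica count.

```typescript
import * as k8s from "@pulumi/kubernetes";

const labels = { app: "web" };

// Deployment deliberately omits spec.replicas so the HPA owns the count.
const deployment = new k8s.apps.v1.Deployment("web", {
    spec: {
        selector: { matchLabels: labels },
        template: {
            metadata: { labels },
            spec: { containers: [{ name: "web", image: "nginx:1.25" }] },
        },
    },
});

// HPA scales the Deployment between 2 and 10 replicas based on CPU usage.
const hpa = new k8s.autoscaling.v1.HorizontalPodAutoscaler("web", {
    spec: {
        scaleTargetRef: {
            apiVersion: "apps/v1",
            kind: "Deployment",
            name: deployment.metadata.name,
        },
        minReplicas: 2,
        maxReplicas: 10,
        targetCPUUtilizationPercentage: 70,
    },
});
```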
b
you can set `ignoreChanges` in the pulumi CustomResourceOptions
new k8s.apps.v1.Deployment(name, args, { ignoreChanges: ["spec.replicas"] });
something like that IIRC
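Spelled out a bit more, that suggestion looks like the following sketch (same illustrative Deployment as above): `ignoreChanges` takes an array of property paths, and on a Deployment the replica count lives at `spec.replicas`, so Pulumi is told to leave that field alone and the HPA’s value survives updates.

```typescript
import * as k8s from "@pulumi/kubernetes";

// Same illustrative Deployment, now with resource options telling Pulumi
// to ignore changes to spec.replicas when diffing and updating.
const deployment = new k8s.apps.v1.Deployment("web", {
    spec: {
        selector: { matchLabels: { app: "web" } },
        template: {
            metadata: { labels: { app: "web" } },
            spec: { containers: [{ name: "web", image: "nginx:1.25" }] },
        },
    },
}, { ignoreChanges: ["spec.replicas"] });
```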
g
yeah, it’s just strange that this happens in the first place. it’s like pulumi completely takes over the replica sets to do its own thing. When applying raw yaml with this field set to `1`, this behavior isn’t experienced.