
K8s scaling (within the Rancher UI)

An odd way Rancher scales a Kubernetes workload.

We are scaling a workload from 60 pods down to 3 and back up to 60. On scaling down, Rancher first creates the 3 pods that will remain and then deletes the original 60.

When scaling back up to 60, it creates roughly 25% too many pods, peaking at 75 a couple of times in the process before finally settling at the desired size.
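That 75-pod peak (60 + 25%) looks suspiciously like the default rolling-update surge allowance for a Kubernetes Deployment, which would suggest Rancher is triggering a redeploy rather than a plain scale. A minimal sketch of the relevant strategy fields (these values are the Kubernetes defaults, not something confirmed from Rancher's internals):

```yaml
# Default rolling-update strategy on a Deployment spec.
# With 60 replicas, maxSurge: 25% permits up to 15 extra pods
# during a rollout -- matching the 75-pod peak observed above.
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 25%
    maxUnavailable: 25%
```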

It's obvious that Rancher adds some overhead on top of K8s, but I don't (yet) understand why scaling workloads through its UI behaves so differently from using kubectl, as seen here:
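For comparison, a plain scale through kubectl only changes the replica count; it creates no surge pods and never deletes-and-recreates the survivors (the deployment name `my-workload` here is illustrative):

```shell
# Scale down: Kubernetes terminates 57 pods directly;
# no replacement pods are created first.
kubectl scale deployment my-workload --replicas=3

# Scale back up: exactly 57 new pods are created;
# the pod count never overshoots the target of 60.
kubectl scale deployment my-workload --replicas=60
```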

