While adding the new nodes for T244791: Scale up 2020 Kubernetes cluster for final migration of legacy cluster workloads, I noticed that an "empty" worker has about 10% of its available CPU and 13% of its available RAM consumed by the calico, kube-proxy, and cadvisor pods. That feels like a lot of overhead for each worker to carry in an "idle" state.
- Calico pods are requesting 250m CPU with no explicit RAM request and no explicit limit on CPU or RAM.
- Kube-proxy pods have no explicit request or limit values in the pod template.
- Cadvisor pods request 150m CPU and 200Mi RAM with 300m CPU and 2000Mi RAM limits.
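As a rough sanity check on the CPU figure, the explicit requests above add up as follows (a sketch only; the 4-vCPU node size is an assumption, and kube-proxy contributes nothing since it requests no resources):

```python
# Hypothetical worker capacity -- an assumption, adjust to the actual node flavor.
NODE_CPU_MILLI = 4000  # 4 vCPUs expressed in millicores

# Explicit CPU requests from the pods listed above (kube-proxy requests none).
requests_milli = {"calico": 250, "cadvisor": 150}

total = sum(requests_milli.values())      # 400m total requested
pct = 100 * total / NODE_CPU_MILLI        # fraction of node CPU reserved
print(f"{total}m requested = {pct:.1f}% of node CPU")  # 400m = 10.0%
```

On a 4-vCPU node the explicit requests alone account for the ~10% figure, so any real savings would have to come from lowering those requests rather than from eliminating hidden consumers.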
Can any of these request values be tuned downward? And can reasonable limit values be set on everything, so a misbehaving system pod cannot starve tool workloads?
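If the defaults do turn out to be tunable, the knob is the `resources` stanza on the container in each component's DaemonSet pod template, along the lines of the sketch below. The numbers are placeholders to show the shape, not recommendations; appropriate values would need to be measured per component:

```yaml
# Illustrative resources stanza for a container in a DaemonSet pod template
# (e.g. kube-proxy, which currently has none). Values are placeholders only.
resources:
  requests:
    cpu: 50m
    memory: 64Mi
  limits:
    cpu: 200m
    memory: 256Mi
```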