We're encountering issues deploying low-replica releases (canary and mw-debug) of mw-on-k8s.
```
0/22 nodes are available: 16 Insufficient cpu, 2 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) had taint {dedicated: kask}, that the pod didn't tolerate.
```
This is due to the sum of CPU requests from the deployments on wikikube exceeding the cluster's available CPU.
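For reference, the scheduler's per-node check amounts to comparing a node's allocatable CPU against the sum of requests already placed on it. A minimal sketch of that arithmetic (node names and numbers are made up for illustration, not actual wikikube capacity):

```python
def schedulable_nodes(nodes, pod_request_m):
    """Return names of nodes whose unreserved CPU (millicores) fits the pod's request."""
    return [
        name
        for name, (allocatable_m, requested_m) in nodes.items()
        if allocatable_m - requested_m >= pod_request_m
    ]

# Each node maps to (allocatable CPU, sum of existing pod requests), in millicores.
# Hypothetical figures: both nodes have 46 CPU allocatable.
nodes = {
    "worker-a": (46000, 45500),  # only 500m headroom left
    "worker-b": (46000, 43000),  # 3000m headroom
}

# A pod requesting 2.5 CPU (2500m) only fits where headroom >= 2500m.
print(schedulable_nodes(nodes, 2500))  # → ['worker-b']
```

When no node passes this check, the pod stays Pending and the scheduler emits a message like the one above; lowering requests or adding worker capacity are the two levers.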
This blocks T342748: mw-on-k8s app container CPU throttling at low average load remediation, which raises CPU requests for mw-on-k8s releases.
It has been emergency-mitigated by artificially lowering the requests for canary releases and the mw-debug deployment in https://gerrit.wikimedia.org/r/c/operations/deployment-charts/+/944229.
This is not a permanent solution, however, and in the absence of T264625: Deploy kube-state-metrics (to have more precise data) and T342533: Q1:rack/setup/install kubernetes10[27-56] (to have new hardware), we should remediate by re-imaging a few servers from the appserver cluster into kubernetes workers.
Currently, the eqiad wikikube cluster runs 15 more pods than codfw, so giving it a couple of extra nodes compared to codfw seems justified.