We need to define the ideal resource requests and limits for a single pod running MediaWiki. Specifically, we need to fill in the following limits:
| container | memory | cores |
|-----------|--------|-------|
| httpd | | |
| php | | |
| mcrouter | | |
| nutcracker | | |
| mcrouter (dset) | | |
| nutcracker (dset) | | |
I added two lines each for mcrouter and nutcracker to cover the two deployment options: running them as containers inside the pod, or as daemonsets.
I have some basic numbers for the php image. Most of these are a function of the number of php workers we're going to run in the pod.
- opcache size doesn't depend on the size of the pod. We need to reserve ~400 MB of memory for opcache (and keep an eye on it)
- APCu space. Currently an appserver uses ~1.5 GB of APCu and an API server uses ~400 MB of it. We might expect this to be somewhat smaller for a smaller installation, but not by as much as we'd like.
- Each worker will need ~500 MB of memory available (more for Parsoid servers)
- Each worker needs d_f * 0.5 CPU cores, where d_f is a damping factor that I would empirically set at 0.6
- We always need to add 2 workers to serve the liveness probes against /status
So we have a relatively simple pair of equations to play with:

CPU(n_workers) = d_f * (n_workers - 2) / 2
MEM(n_workers) = opcache + apcu + mem_limit * n_workers

(here n_workers includes the 2 probe workers, which we assume consume negligible CPU but still count towards memory)
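As a rough sketch, the two formulas can be written as a small helper. All constants below are the estimates from this task (opcache ~0.4 GB, appserver APCu ~1.5 GB, 0.5 GB per worker, d_f = 0.6), not measured values:

```python
# Sketch of the sizing formulas above; all constants are the rough
# estimates from this task, not measured values.
OPCACHE_GB = 0.4         # reserved for opcache, independent of pod size
APCU_GB = 1.5            # APCu on an appserver (~0.4 GB on an API server)
MEM_PER_WORKER_GB = 0.5  # per-worker memory limit
D_F = 0.6                # empirical damping factor
PROBE_WORKERS = 2        # extra workers reserved for the /status probes

def pod_cpu(n_workers: int) -> float:
    """Cores needed: d_f * (n_workers - 2) / 2."""
    return D_F * (n_workers - PROBE_WORKERS) / 2

def pod_mem(n_workers: int) -> float:
    """Memory needed in GB: opcache + apcu + 0.5 GB per worker."""
    return OPCACHE_GB + APCU_GB + MEM_PER_WORKER_GB * n_workers

# e.g. a pod with 8 workers:
# pod_cpu(8) -> 1.8 cores, pod_mem(8) -> 5.9 GB
```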
The goal is to pack 4 or even 5 such pods onto a single modern node.
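Inverting the memory formula gives a feel for how large each pod can be at that packing density, and whether CPU then becomes the binding constraint. A quick sketch, assuming a hypothetical node with 64 cores and 128 GB of RAM (purely illustrative numbers, not our actual hardware):

```python
import math

# Hypothetical node size, purely illustrative.
NODE_CORES = 64
NODE_MEM_GB = 128

# Same estimates as in the task description.
OPCACHE_GB, APCU_GB, MEM_PER_WORKER_GB, D_F = 0.4, 1.5, 0.5, 0.6

def max_workers_for_packing(pods_per_node: int) -> int:
    """Largest n_workers such that `pods_per_node` pods fit in node memory."""
    mem_budget = NODE_MEM_GB / pods_per_node
    return math.floor((mem_budget - OPCACHE_GB - APCU_GB) / MEM_PER_WORKER_GB)

def pods_cpu_demand(n_workers: int, pods_per_node: int) -> float:
    """Total cores those pods would need; compare against NODE_CORES."""
    return pods_per_node * D_F * (n_workers - 2) / 2

# With 4 pods per node, memory allows up to max_workers_for_packing(4)
# workers per pod; if pods_cpu_demand for that figure exceeds NODE_CORES,
# CPU (not memory) is the binding constraint on this node size.
```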