It could be interesting to do some "real-world" testing on the new Kubernetes cluster in toolsbeta before starting the final migration in the tools project.
The toolsbeta cluster is currently scaled to 3 control nodes, 2 worker nodes, 3 etcd servers, 1 haproxy node, and 1 front proxy (dynamicproxy).
Some ideas and questions I would like to see answered:
* how many requests can the north-south proxy setup handle? i.e., front proxy (dynamicproxy) + haproxy + ingress
* how many pods can we run on just a couple of worker nodes? how does oversubscribing memory and CPU work in this new cluster?
* how does nginx-ingress behave when hundreds of ingress objects are being created/removed?
* what happens when we scale the cluster up/down, specifically when adding/removing control and worker nodes? Is service interrupted in any way?
* the haproxy setup is not HA. We have a cold standby server. How long (and how badly) is service interrupted in case of failover?
* the front proxy (dynamicproxy) setup is not HA. We have a cold standby server. How long (and how badly) is service interrupted in case of failover?
* estimate/test the service impact of relocating a VM (etcd, worker, control, haproxy, front proxy) to a different cloudvirt
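For the north-south throughput question, something like the following sketch could be a starting point: fire concurrent GETs at an endpoint and report throughput. The local stub server here only makes the sketch self-contained; against toolsbeta the target would be the real front proxy URL (an assumption, not part of this plan).

```python
# Hedged sketch of a north-south load probe. The stub server stands in for
# the dynamicproxy + haproxy + ingress chain so the script runs anywhere.
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

def start_stub_server():
    # Bind to an ephemeral port so the sketch never collides with anything.
    srv = ThreadingHTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv

def hit(url):
    with urllib.request.urlopen(url, timeout=5) as r:
        return r.status

def load_test(url, total=200, concurrency=20):
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        statuses = list(pool.map(hit, [url] * total))
    elapsed = time.monotonic() - start
    ok = sum(1 for s in statuses if s == 200)
    return ok, elapsed, total / elapsed

if __name__ == "__main__":
    srv = start_stub_server()
    target = f"http://127.0.0.1:{srv.server_address[1]}/"
    ok, elapsed, rps = load_test(target)
    print(f"{ok}/200 ok in {elapsed:.2f}s ({rps:.0f} req/s)")
    srv.shutdown()
```

A dedicated tool (ab, wrk, hey) would give better numbers; the point is only to show the shape of the measurement.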
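On pod capacity and oversubscription: the scheduler bins pods onto nodes by their resource *requests*, while *limits* above the requests are what oversubscription means in practice. A back-of-envelope sketch (all numbers below are illustrative assumptions, not the real toolsbeta node sizes):

```python
# Pod packing estimate: the scheduler fits pods by requests, not limits.
def pods_per_node(alloc_cpu_m, alloc_mem_mi, req_cpu_m, req_mem_mi):
    # The tighter of the two resource dimensions wins.
    return min(alloc_cpu_m // req_cpu_m, alloc_mem_mi // req_mem_mi)

node_cpu_m, node_mem_mi = 3800, 7600   # hypothetical worker allocatable
req_cpu_m, req_mem_mi = 250, 512       # hypothetical per-pod requests
lim_cpu_m = 500                        # limit = 2x request -> oversubscribed

fit = pods_per_node(node_cpu_m, node_mem_mi, req_cpu_m, req_mem_mi)
overcommit = fit * lim_cpu_m / node_cpu_m
print(f"{fit} pods/node; CPU overcommit if all pods hit their limit: {overcommit:.1f}x")
```

Measuring what actually happens when the overcommitted limits are exercised (throttling, OOM kills) is the interesting part of the test.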
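For the nginx-ingress churn test, one simple approach is to generate a few hundred minimal Ingress manifests and apply/delete them in a loop with kubectl while watching the controller. A sketch that only builds the manifests (the name, namespace, host pattern, and backend service are all made up for illustration):

```python
# Generate N minimal Ingress manifests as one multi-document YAML string,
# suitable for `kubectl apply -f -` / `kubectl delete -f -` in a churn loop.
INGRESS_TEMPLATE = """\
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: churn-test-{i}
  namespace: churn-test
spec:
  rules:
  - host: churn-{i}.example.org
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: churn-svc
            port:
              number: 80
"""

def make_manifests(n):
    # Join documents with the YAML document separator.
    return "---\n".join(INGRESS_TEMPLATE.format(i=i) for i in range(n))

if __name__ == "__main__":
    docs = make_manifests(100)
    print(docs.count("kind: Ingress"), "ingress objects generated")
```

While churning, the things to watch would be nginx reload frequency, controller memory, and whether existing routes drop traffic during reloads.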
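The failover, scaling, and VM-relocation questions all reduce to the same measurement: poll an endpoint at a fixed interval during the operation and report the longest observed outage window. A hedged sketch (the polled URL would be the haproxy or dynamicproxy endpoint; that choice is an assumption):

```python
# Availability probe: sample an endpoint repeatedly, then compute the
# longest contiguous stretch of failed samples.
import time
import urllib.error
import urllib.request

def probe(url, duration_s, interval_s=0.2):
    """Return a list of (timestamp, ok) samples over duration_s seconds."""
    samples = []
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        t = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=1) as r:
                ok = r.status == 200
        except (urllib.error.URLError, OSError):
            ok = False
        samples.append((t, ok))
        time.sleep(interval_s)
    return samples

def longest_outage(samples):
    """Longest gap (seconds) between a failing sample and the next success."""
    worst, start = 0.0, None
    for t, ok in samples:
        if not ok and start is None:
            start = t
        elif ok and start is not None:
            worst = max(worst, t - start)
            start = None
    if start is not None and samples:
        # Outage still ongoing at the end of the probe window.
        worst = max(worst, samples[-1][0] - start)
    return worst
```

Running the probe before, during, and after each operation (haproxy failover, dynamicproxy failover, node add/remove, VM relocation) gives a comparable "how long, how bad" number for every bullet above.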