Before (or while) we open this up to the public, we want to ramp up the pool's pg_num to match the current compute pool.
Description
Event Timeline
Mentioned in SAL (#wikimedia-cloud) [2021-04-27T10:48:55Z] <dcaro> ceph.eqiad: Tweaked the target_size_ratio of all the pools, enabling autoscaler (it will increase cinder pool only) (T273783)
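For reference, a sketch of the kind of commands behind that tweak (the exact invocations aren't recorded in this task; pool names and ratios are taken from the autoscale-status output below):

# Give the autoscaler a capacity target ratio per pool
sudo ceph osd pool set eqiad1-cinder target_size_ratio 0.2
sudo ceph osd pool set eqiad1-glance-images target_size_ratio 0.1
sudo ceph osd pool set eqiad1-compute target_size_ratio 0.7
# Let the pg_autoscaler manage pg_num for each pool
sudo ceph osd pool set eqiad1-cinder pg_autoscale_mode on
sudo ceph osd pool set eqiad1-glance-images pg_autoscale_mode on
sudo ceph osd pool set eqiad1-compute pg_autoscale_mode on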
Current autoscale setup (notice the target ratios and the new pg_num):
dcaro@cloudcephmon1001:~$ sudo ceph osd pool autoscale-status
POOL                  SIZE    TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
eqiad1-cinder         3329G                3.0   209.5T        0.0466  0.2000        0.2000           1.0   128     1024        on
eqiad1-glance-images  209.9G               3.0   209.5T        0.0029  0.1000        0.1000           1.0   1024                on
eqiad1-compute        46157G               3.0   209.5T        0.6454  0.7000        0.7000           1.0   4096                on
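To double-check what the autoscaler is about to apply (here it will only bump eqiad1-cinder from 128 to 1024 PGs), the per-pool values can also be queried directly; a small example, not part of the original comment:

sudo ceph osd pool get eqiad1-cinder pg_num
sudo ceph osd pool get eqiad1-cinder pg_autoscale_mode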
Mentioned in SAL (#wikimedia-cloud) [2021-04-27T10:51:42Z] <dcaro> ceph.eqiad: cinder pool got its pg_num increased to 1024, re-shuffle started (T273783)
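A sketch of how the re-shuffle is typically watched from a mon host (the task doesn't record the exact commands used):

sudo ceph -s                              # overall health plus the percentage of misplaced objects still moving
sudo ceph osd pool stats eqiad1-cinder    # per-pool recovery and client I/O rates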
Done, everything rebalanced. The balancer is now able to do a better job: the score (ceph balancer eval) went down
from 0.008 to 0.0047, and the busiest OSD dropped from 77% to 70.83% utilization (roughly 6 percentage points of space reclaimed on that OSD! \o/).
It should also have a positive effect on the performance of Ceph volumes.
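For anyone re-checking those numbers later, the usual commands are (standard Ceph CLI, not copied from the task):

sudo ceph balancer eval    # cluster-wide balance score, lower is better
sudo ceph osd df           # per-OSD utilization; the %USE column shows the busiest OSDs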