Wikireplicas hosts are approaching 90% usage on `/srv`:
```
===== NODE GROUP =====
(1) labsdb1010.eqiad.wmnet
----- OUTPUT of 'df -hT /srv' -----
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/tank-data xfs 12T 10T 1.7T 87% /srv
===== NODE GROUP =====
(1) labsdb1009.eqiad.wmnet
----- OUTPUT of 'df -hT /srv' -----
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/tank-data xfs 12T 10T 1.7T 86% /srv
===== NODE GROUP =====
(1) labsdb1012.eqiad.wmnet
----- OUTPUT of 'df -hT /srv' -----
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/tank-data xfs 14T 11T 3.9T 73% /srv
===== NODE GROUP =====
(1) labsdb1011.eqiad.wmnet
----- OUTPUT of 'df -hT /srv' -----
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/tank-data xfs 12T 11T 1.6T 87% /srv
```
I just did a quick check on enwiki and wikidata, and there are indeed tables that need to be compressed (especially some that are temporary but also quite big), so I assume the same is true for most of the wikis:
{P8515}
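For reference, a query along these lines can list the biggest non-compressed InnoDB tables (an illustrative sketch, not necessarily the exact query behind the paste above):

```sql
-- Sketch: find the largest InnoDB tables whose row format is not
-- Compressed, i.e. candidates for compression.
SELECT t.table_schema,
       t.table_name,
       t.row_format,
       ROUND((t.data_length + t.index_length) / 1024 / 1024 / 1024, 2) AS size_gb
FROM information_schema.tables AS t
WHERE t.engine = 'InnoDB'
  AND t.row_format <> 'Compressed'
ORDER BY (t.data_length + t.index_length) DESC
LIMIT 50;
```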
I think at the same time we can also defragment all the tables, as I don't think they've been defragmented since these hosts were set up around two years ago.
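The per-table rebuild could look like the sketch below (the table name is just a placeholder; on InnoDB, `OPTIMIZE TABLE` is internally mapped to `ALTER TABLE ... FORCE`, which recreates the table and its indexes and reclaims the fragmented space):

```sql
-- Defragment one table: rebuilds the table and indexes, reclaiming space.
-- `enwiki.templatelinks` is only an example name.
OPTIMIZE TABLE enwiki.templatelinks;

-- Or compress it during the rebuild in a single pass:
ALTER TABLE enwiki.templatelinks ROW_FORMAT=COMPRESSED;
```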
Status:
[x] labsdb1009
* Last compressed table: new file `/home/jynus/labsdb1009_tables_to_compress.txt` was generated from a fresh query (2019-07-19)
[ ] labsdb1010
* Last compressed table: new file `/home/marostegui/labsdb1010_non_compressed_tables.txt` was generated from a fresh query (2019-07-17)
[x] labsdb1011
* T222978#5305810
[x] labsdb1012
* T222978#5246257