Wikireplicas hosts are approaching 90% usage on `/srv`:
```
===== NODE GROUP =====
(1) labsdb1010.eqiad.wmnet
----- OUTPUT of 'df -hT /srv' -----
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/tank-data xfs 12T 10T 1.7T 87% /srv
===== NODE GROUP =====
(1) labsdb1009.eqiad.wmnet
----- OUTPUT of 'df -hT /srv' -----
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/tank-data xfs 12T 10T 1.7T 86% /srv
===== NODE GROUP =====
(1) labsdb1012.eqiad.wmnet
----- OUTPUT of 'df -hT /srv' -----
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/tank-data xfs 14T 11T 3.9T 73% /srv
===== NODE GROUP =====
(1) labsdb1011.eqiad.wmnet
----- OUTPUT of 'df -hT /srv' -----
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/tank-data xfs 12T 11T 1.6T 87% /srv
```
I just did a quick check on enwiki and wikidata, and there are indeed tables that need to be compressed (especially ones that are temporary but also quite big), so I assume the same is true for most of the wikis:
{P8515}
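For reference, a check along these lines can be done against `information_schema` (a sketch only: the 10 GB threshold is arbitrary, and it assumes the client picks up credentials and socket from the local config):
```
mysql -BN information_schema <<'SQL'
-- Large InnoDB tables that are not yet compressed, biggest first
SELECT table_schema, table_name,
       ROUND((data_length + index_length) / POW(1024, 3), 1) AS size_gb
FROM tables
WHERE engine = 'InnoDB'
  AND row_format <> 'Compressed'
  AND (data_length + index_length) > 10 * POW(1024, 3)
ORDER BY (data_length + index_length) DESC;
SQL
```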
I think at the same time we can also defragment all the tables, as I believe they have not been defragmented since these hosts were set up around two years ago.
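Conveniently, compressing a table already implies a full rebuild, so it defragments it as a side effect; only tables that are already compressed would need a separate rebuild. A sketch (schema and table names are placeholders, and it assumes `innodb_file_per_table` is enabled, as is standard on these hosts):
```
# Compressing rebuilds the table, which defragments it at the same time:
mysql enwiki -e "ALTER TABLE templatelinks ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;"

# For tables that are already compressed, a plain rebuild is enough:
mysql enwiki -e "OPTIMIZE TABLE templatelinks;"
```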
Status:
[ ] labsdb1009
* Last compressed table: none from the new list; the file `/home/jynus/labsdb1009_tables_to_compress.txt` was generated from a fresh query (2019-05-28), so processing starts from its beginning
[ ] labsdb1010
* Just a few individual tables have been compressed so far; the following file needs to be processed from the beginning: `/home/marostegui/labsdb1010_non_compressed_tables.txt`
[ ] labsdb1011
* Last compressed table: `enwiki.content_models`, based on the file `/home/marostegui/labsdb1011_non_compressed_tables.txt`, so the next iteration needs `-n +11828` (see the resume sketch below)
[ ] labsdb1012
* Last compressed table: `nlwiktionary.protected_titles`, based on `/home/marostegui/labsdb1012_non_compressed_tables.txt`, so the next iteration needs `-n +33539`
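For the record, resuming can be done along these lines (a sketch: it assumes the list files contain one `schema.table` per line, that the `-n +N` offsets above are `tail` arguments, and that credentials come from the local client config):
```
# Resume compression on labsdb1011 from the recorded offset
tail -n +11828 /home/marostegui/labsdb1011_non_compressed_tables.txt |
while IFS=. read -r schema table; do
    echo "Compressing ${schema}.${table}"
    mysql "$schema" -e "ALTER TABLE \`${table}\` ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;"
done
```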