The abstracts dump is quite large for some sites; e.g. the enwiki one is 5 GB, which would be only 660 MB with gzip compression. Similarly, the wikidatawiki abstracts are 59 GB now, but only 4.1 GB gzipped.
Compression would also be nice even for small files such as the siteinfo-namespaces dump, because we could then easily distinguish status files (html/json/txt) from dump content (gz/bz2/7z) by extension alone, without a hardcoded list of file names; a rough sketch of that idea follows.
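A minimal sketch of that extension-based split, assuming hypothetical helper names (this is not code from the dumps repository, just an illustration of the idea):

```python
import os

# Extensions used by compressed dump content vs. plain-text status files.
COMPRESSED_EXTENSIONS = {".gz", ".bz2", ".7z"}
STATUS_EXTENSIONS = {".html", ".json", ".txt"}

def is_dump_content(filename: str) -> bool:
    """Return True if the file looks like compressed dump content."""
    return os.path.splitext(filename)[1].lower() in COMPRESSED_EXTENSIONS

def is_status_file(filename: str) -> bool:
    """Return True if the file looks like a status/metadata file."""
    return os.path.splitext(filename)[1].lower() in STATUS_EXTENSIONS

if __name__ == "__main__":
    # Example file names are made up for illustration only.
    for name in ["enwiki-20240101-abstract.xml.gz",
                 "dumpstatus.json",
                 "index.html"]:
        print(name, "->", "content" if is_dump_content(name) else "status")
```

If every piece of dump content were compressed, a simple suffix check like this would be enough to tell the two kinds of files apart.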