The CI jobs for MediaWiki initialize a dummy database held in a tmpfs. A job can eventually crash when the database exceeds the size of the tmpfs (T217654).
This task aims at:
- reducing the disk footprint of the initialized database
- adding tweaks to speed up DB interactions
- finding an acceptable size for the tmpfs (currently 320 MBytes)
The CI jobs for MediaWiki use the Quibble test runner, which runs in either a Jessie or a Stretch Docker container:
| Container | mariadb-server package version |
|---|---|
| docker-registry.wikimedia.org/releng/quibble-stretch | 10.1.37-0+deb9u1 |
| docker-registry.wikimedia.org/releng/quibble-jessie | 10.0.36-0+deb8u1 |
The configuration is mostly the stock one, although we change:
```
[client]
# Stretch defaults to utf8mb4. T193222
default-character-set = binary

[mysqld]
# Stretch defaults to utf8mb4. T193222
character_set_server = binary
character_set_filesystem = binary
collation_server = binary
```
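To confirm the overrides took effect, the running server can be queried with the standard client:

```
mysql -e "SHOW VARIABLES LIKE 'character_set_%'; SHOW VARIABLES LIKE 'collation_%';"
```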
Quibble creates the database using mysql_install_db:
```
mkdir -p /tmp/dummydb
mysql_install_db --datadir=/tmp/dummydb --user="$USER"
```
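The datadir is expected to live on a tmpfs; locally that layout can be reproduced with something like the following (a sketch; the CI environment may instead provision the tmpfs differently, e.g. through Docker's `--tmpfs` flag):

```
# Back /tmp/dummydb with a 320 MB tmpfs (the current CI size)
# before mysql_install_db populates it. Requires root.
sudo mount -t tmpfs -o size=320m tmpfs /tmp/dummydb
```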
After the install, the data directory contains:
| db file | Size |
|---|---|
| ibdata1 | 12 MBytes |
| ib_logfile0 | 48 MBytes |
| ib_logfile1 | 48 MBytes |
| Total | 108 MBytes |
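Those figures can be rechecked at any time with standard tools:

```
ls -lh /tmp/dummydb/ib*
du -sh /tmp/dummydb
```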
We should be able to reduce that initial footprint.
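Most of it is the two 48 MBytes redo logs, which are tunable. A minimal sketch of a smaller layout, with illustrative (untested) values:

```
[mysqld]
# Two 4 MB redo logs instead of the 48 MB defaults. InnoDB
# recreates the logs after a clean restart on these versions.
innodb_log_file_size = 4M
innodb_log_files_in_group = 2
# Grow ibdata1 in 8 MB steps instead of the default 64 MB.
innodb_autoextend_increment = 8
```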
The database is held in a tmpfs since that dramatically speeds up MediaWiki tests hitting the database. I assume the default settings come with safety in mind; however, on CI we can afford to lose data in case of a database crash, and can surely be less paranoid about how transactions/buffers are written to disk.
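For instance, a sketch of durability settings that trade crash safety for speed (suggestions to evaluate, not tested on CI):

```
[mysqld]
# Flush the redo log roughly once per second instead of at
# every transaction commit.
innodb_flush_log_at_trx_commit = 0
# Skip fsync of .frm files on DDL statements.
sync_frm = 0
# Sync the binlog lazily (only relevant if binlogs are enabled).
sync_binlog = 0
```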
Suggestions I have received so far (a combined sketch follows the list):
- reduce the extent size
- disable the double write buffer
- disable binlogs
- use separate innodb-undo-tablespaces to avoid growing ibdata1
- use innodb_file_per_table
- enable compression (would be nice for the huge l10n_cache table)
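Translated into configuration, most of these map to the options below (a sketch with assumed values; note that innodb_undo_tablespaces must be set before mysql_install_db initializes the datadir, and I have not found a standalone knob for the extent size):

```
[mysqld]
# Skip the doublewrite buffer; torn pages don't matter on tmpfs.
innodb_doublewrite = 0
# Make sure binary logging stays disabled.
skip-log-bin
# Keep undo logs out of ibdata1 (set before datadir creation).
innodb_undo_tablespaces = 4
# One .ibd file per table, a prerequisite for compressed tables.
innodb_file_per_table = 1
innodb_file_format = Barracuda
```

Compression would then be opt-in per table, e.g. `ALTER TABLE l10n_cache ROW_FORMAT=COMPRESSED;` on the job's wiki database.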