
Optimize MySQL settings for MediaWiki CI / Quibble
Open, Low, Public


The CI jobs for MediaWiki initialize a dummy database which is held in a tmpfs. Eventually the job can crash when the database exceeds the size of the tmpfs (T217654).

This task aims to:

  • reduce the disk footprint of the initialized database
  • add tweaks to speed up database interactions
  • find an acceptable size for the tmpfs (currently 320 MBytes)

The CI jobs for MediaWiki use the Quibble test runner. It is run in either a Jessie or a Stretch Docker container:

Container        mariadb-server package version

The configuration is mostly the stock one; the only changes are:

# Stretch defaults to utf8mb4. T193222
default-character-set = binary

# Stretch defaults to utf8mb4. T193222
character_set_server     = binary
character_set_filesystem = binary
collation_server         = binary

Quibble creates the database using mysql_install_db:

mkdir -p /tmp/dummydb
mysql_install_db --datadir=/tmp/dummydb --user="$USER"

Then after install:

db file        Size
ibdata1        12 MBytes
ib_logfile0    48 MBytes
ib_logfile1    48 MBytes
Total          108 MBytes

We should be able to reduce that initial footprint.

The database is held in a tmpfs since that dramatically speeds up MediaWiki tests hitting the database. I assume the default settings come with safety in mind; however, on CI we can afford to lose data in case of a database crash, and can surely be less paranoid about how transactions and buffers are written to disk.
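As a sketch of what "less paranoid" could look like, the following my.cnf settings trade crash safety for speed. The values are assumptions to be benchmarked on CI, not something we have validated:

# Durability tweaks for a disposable tmpfs database (assumed values).
innodb_flush_log_at_trx_commit = 0  # flush the redo log roughly once per
                                    # second instead of at every commit
sync_binlog = 0                     # never fsync the binary log

With innodb_flush_log_at_trx_commit = 0 a server crash can lose up to about a second of committed transactions, which is irrelevant for a throwaway CI database.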

It has been suggested to:

  • reduce the extent size
  • disable the doublewrite buffer
  • disable binlogs
  • use a separate innodb-undo-tablespaces setting to avoid growing ibdata1
  • use innodb_file_per_table
  • enable compression (would be nice for the huge l10n_cache table)
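Most of the suggestions above map to my.cnf settings. A sketch, using MariaDB option names with assumed values that would still need benchmarking:

# Suggested tweaks (assumed values, to be validated on CI).
innodb_doublewrite      = 0   # disable the doublewrite buffer
skip-log-bin                  # ensure binary logging stays disabled
innodb_undo_tablespaces = 2   # keep undo logs out of ibdata1
innodb_file_per_table   = 1   # one .ibd file per table
innodb_log_file_size    = 4M  # shrink the two 48 MByte redo logs

Compression is a per-table property rather than a global setting: with innodb_file_per_table enabled, a table such as l10n_cache could be created or altered with ROW_FORMAT=COMPRESSED.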

Event Timeline

hashar triaged this task as Low priority. Mar 13 2019, 10:30 AM
hashar created this task.

Finding nicer settings for the CI database configuration is a nice-to-have that can happen over the course of a few weeks. Hence setting to low priority.

One thing that would also speed things up, among many other ideas, would be to not run mysql_install_db every time, but instead keep a prepared empty data dir that is copied for each job. The downside is that it would have to be rebuilt on each mysql package upgrade.
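A minimal sketch of that idea, assuming hypothetical paths for the template and the per-job data dir:

```shell
# Hypothetical layout: a data dir template prepared once, copied per job.
TEMPLATE="${TEMPLATE:-/tmp/dummydb-template}"
DATADIR="${DATADIR:-/tmp/dummydb}"

# One-time step, repeated after every mariadb-server package upgrade:
if [ ! -d "$TEMPLATE" ]; then
    mkdir -p "$TEMPLATE"
    # mysql_install_db --datadir="$TEMPLATE" --user="$USER"
fi

# Per-job step: a plain recursive copy replaces running mysql_install_db.
rm -rf "$DATADIR"
mkdir -p "$DATADIR"
cp -a "$TEMPLATE/." "$DATADIR/"
```

The copy is cheap on tmpfs, and invalidating the template on package upgrades could be done by keying its path on the mariadb-server package version.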

In general we would need some testing to see how low we can go without affecting the actual performance of the CI tests.

the huge l10n_cache table

This can also be put on disk instead, e.g. using PHP array files (preferred) or CDB. Those also generate faster, which speeds up the install segment of the Jenkins build as well.

This would also have the benefit of more closely matching beta/production, which also use LCStore on disk. We'd configure it in CI (somehow) with $wgLocalisationCacheConf['store'] = 'array'; this also requires $wgCacheDirectory to be set, e.g. to /tmp.

Marostegui moved this task from Triage to Backlog on the DBA board. Mar 15 2019, 6:58 AM
Krinkle updated the task description. Mar 15 2019, 10:46 PM