Currently this is the second-biggest write query changing more than 1000 rows. In one case I checked, it removed more than 19,000 rows in a single query, which can easily make wikis go read-only. This needs batching.
https://logstash.wikimedia.org/goto/f4c2e73ced8a93ada202f981118284aa
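The usual batching pattern for this is to select a bounded set of primary keys for the expired rows and delete only those, repeating until done. A minimal sketch of that pattern, assuming MediaWiki's IDatabase wrapper and the extension's globalblocks table (gb_id primary key, gb_expiry); the $dbw handle, the batch size of 1000, and the outer loop are illustrative assumptions, not the actual patch:

```php
<?php
// Sketch only: purge expired rows in batches by primary key + LIMIT,
// so each DELETE touches a known, bounded number of rows.
// $dbw (a primary-DB IDatabase handle) and the batch size are assumed.
$batchSize = 1000;
do {
	// Collect at most $batchSize primary keys of expired blocks.
	$ids = $dbw->selectFieldValues(
		'globalblocks',
		'gb_id',
		[ 'gb_expiry <= ' . $dbw->addQuotes( $dbw->timestamp() ) ],
		__METHOD__,
		[ 'LIMIT' => $batchSize ]
	);
	if ( $ids ) {
		// Delete by primary key only, never an open-ended condition.
		$dbw->delete( 'globalblocks', [ 'gb_id' => $ids ], __METHOD__ );
	}
} while ( count( $ids ) === $batchSize );
```

In production code each iteration would typically also wait for replication between batches (e.g. via the load balancer's waitForReplication mechanism) so replicas are not overwhelmed by a long run of deletes.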
Details
Subject | Repo | Branch | Lines +/-
---|---|---|---
Use primary key and limit for purge of expired blocks | mediawiki/extensions/GlobalBlocking | master | +29 -9
Status | Subtype | Assigned | Task
---|---|---|---
Open | None | | T301742 Increase log level of rowsAffected > 1000 on database writes in transaction profiler to warning or error
Resolved | | Umherirrender | T301641 GlobalBlocking purge expired must have a limit
Event Timeline
Change 762134 had a related patch set uploaded (by Umherirrender; author: Umherirrender):
[mediawiki/extensions/GlobalBlocking@master] Use primary key and limit for purge of expired blocks
If there are that many rows to delete, it could mean the deletion currently runs too infrequently.
Maybe a maintenance script is needed for that purpose, or the deletion needs to happen via the job queue, re-triggering the job while there are still rows to delete (a sketch of that idea follows below). But that should be discussed separately.
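For what the job-queue variant could look like: a hypothetical self-rescheduling job that deletes one bounded batch per run and re-queues itself while a full batch keeps coming back. The class name, job name, and use of wfGetDB() are all assumptions for illustration, not part of the merged change (GlobalBlocking actually stores its blocks in a central database):

```php
<?php
// Hypothetical sketch of the "retrigger while rows remain" idea.
// Class and job names are invented; the central GlobalBlocking
// database is glossed over here with a plain wfGetDB() call.
class PurgeExpiredGlobalBlocksJob extends Job {
	private const BATCH_SIZE = 1000;

	public function __construct( array $params = [] ) {
		parent::__construct( 'purgeExpiredGlobalBlocks', $params );
	}

	public function run() {
		$dbw = wfGetDB( DB_PRIMARY );
		// One bounded batch per job run (same pattern as above).
		$ids = $dbw->selectFieldValues(
			'globalblocks',
			'gb_id',
			[ 'gb_expiry <= ' . $dbw->addQuotes( $dbw->timestamp() ) ],
			__METHOD__,
			[ 'LIMIT' => self::BATCH_SIZE ]
		);
		if ( $ids ) {
			$dbw->delete( 'globalblocks', [ 'gb_id' => $ids ], __METHOD__ );
		}
		if ( count( $ids ) === self::BATCH_SIZE ) {
			// A full batch suggests more expired rows remain:
			// queue another run instead of looping in-process.
			JobQueueGroup::singleton()->push( new self() );
		}
		return true;
	}
}
```

Re-queueing instead of looping in-process keeps each job's write transaction small, which is exactly what the transaction profiler warning in T301742 is meant to enforce.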
Change 762134 merged by jenkins-bot:
[mediawiki/extensions/GlobalBlocking@master] Use primary key and limit for purge of expired blocks
For the sake of documentation: it's clean now: https://logstash.wikimedia.org/goto/886e1f77fd095e095272d7cfc79fd795