
Pages with large galleries on uk.wikipedia.org (9000+ files) timeout instead of failing for explicit complexity limits
Closed, Duplicate · Public · PRODUCTION ERROR

Description

Error
  • mwversion: 1.36.0-wmf.33
  • reqId: YEHnR9WbID9AV25HlJKXzwAAAEw
normalized_message
WMFTimeoutException
exception.trace
from /srv/mediawiki/wmf-config/set-time-limit.php(41)
#0 /srv/mediawiki/php-1.36.0-wmf.33/includes/libs/rdbms/database/DatabaseMysqli.php(46): {closure}(integer)
#1 /srv/mediawiki/php-1.36.0-wmf.33/includes/libs/rdbms/database/Database.php(1380): Wikimedia\Rdbms\DatabaseMysqli->doQuery(string)
#2 /srv/mediawiki/php-1.36.0-wmf.33/includes/libs/rdbms/database/Database.php(1298): Wikimedia\Rdbms\Database->executeQueryAttempt(string, string, boolean, string, integer)
#3 /srv/mediawiki/php-1.36.0-wmf.33/includes/libs/rdbms/database/Database.php(1227): Wikimedia\Rdbms\Database->executeQuery(string, string, integer)
#4 /srv/mediawiki/php-1.36.0-wmf.33/includes/libs/rdbms/database/Database.php(1913): Wikimedia\Rdbms\Database->query(string, string, integer)
#5 /srv/mediawiki/php-1.36.0-wmf.33/includes/libs/rdbms/database/Database.php(2013): Wikimedia\Rdbms\Database->select(array, array, array, string, array, array)
#6 /srv/mediawiki/php-1.36.0-wmf.33/includes/libs/rdbms/database/DBConnRef.php(68): Wikimedia\Rdbms\Database->selectRow(array, array, array, string, array, array)
#7 /srv/mediawiki/php-1.36.0-wmf.33/includes/libs/rdbms/database/DBConnRef.php(331): Wikimedia\Rdbms\DBConnRef->__call(string, array)
#8 /srv/mediawiki/php-1.36.0-wmf.33/includes/filerepo/file/LocalFile.php(463): Wikimedia\Rdbms\DBConnRef->selectRow(array, array, array, string, array, array)
#9 /srv/mediawiki/php-1.36.0-wmf.33/includes/filerepo/file/LocalFile.php(327): LocalFile->loadFromDB(integer)
#10 /srv/mediawiki/php-1.36.0-wmf.33/includes/libs/objectcache/wancache/WANObjectCache.php(1604): LocalFile->{closure}(boolean, integer, array, NULL, array)
#11 /srv/mediawiki/php-1.36.0-wmf.33/includes/libs/objectcache/wancache/WANObjectCache.php(1432): WANObjectCache->fetchOrRegenerate(string, integer, Closure, array, array)
#12 /srv/mediawiki/php-1.36.0-wmf.33/includes/filerepo/file/LocalFile.php(360): WANObjectCache->getWithSetCallback(string, integer, Closure, array)
#13 /srv/mediawiki/php-1.36.0-wmf.33/includes/filerepo/file/LocalFile.php(651): LocalFile->loadFromCache()
#14 /srv/mediawiki/php-1.36.0-wmf.33/includes/filerepo/FileRepo.php(475): LocalFile->load(integer)
#15 /srv/mediawiki/php-1.36.0-wmf.33/includes/filerepo/RepoGroup.php(156): FileRepo->findFile(Title, array)
#16 /srv/mediawiki/php-1.36.0-wmf.33/includes/parser/Parser.php(3795): RepoGroup->findFile(Title, array)
#17 /srv/mediawiki/php-1.36.0-wmf.33/includes/parser/Parser.php(3763): Parser->fetchFileNoRegister(Title, array)
#18 /srv/mediawiki/php-1.36.0-wmf.33/includes/gallery/TraditionalImageGallery.php(96): Parser->fetchFileAndTitle(Title, array)
#19 /srv/mediawiki/php-1.36.0-wmf.33/includes/parser/Parser.php(5151): TraditionalImageGallery->toHTML()
#20 /srv/mediawiki/php-1.36.0-wmf.33/includes/parser/CoreTagHooks.php(161): Parser->renderImageGallery(string, array)
#21 /srv/mediawiki/php-1.36.0-wmf.33/includes/parser/Parser.php(3969): CoreTagHooks::gallery(string, array, Parser, PPFrame_Hash)
Impact

All pages under these WLM pages have been timing out permanently since 2021-03-03; the pages are huge (above 1 MB) and _probably_ contain extreme numbers of images. Accessing them times out, and so does editing them.

Notes

It was found by CommonsDelinker, which tried to remove images there and got the timeouts. (It had to be blacklisted, since it made the bot completely stuck.)

Details

Request ID
YEHnR9WbID9AV25HlJKXzwAAAEw
Request URL
https://uk.wikipedia.org/wiki/Вікіпедія:Wiki_Loves_Monuments/Київ/Голосіївський
Stack Trace

Event Timeline

The cascading stack traces in Logstash (query wiki:ukwiki AND "%D0%9A%D0%B8%D1%97%D0%B2%2F%D0%93%D0%BE%D0%BB%D0%BE%D1%81%D1%96%D1%97%D0%B2%D1%81%D1%8C%D0%BA%D0%B8%D0%B9", i.e. the URL-encoded "Київ/Голосіївський", after enabling timeouts on the mediawiki-errors dashboard) seem to point to Parsoid as the first item:

from /srv/mediawiki/wmf-config/set-time-limit.php(41)
#0 /srv/mediawiki/php-1.36.0-wmf.33/vendor/wikimedia/parsoid/src/Utils/DOMUtils.php(82): {closure}(integer)
#1 /srv/mediawiki/php-1.36.0-wmf.33/vendor/wikimedia/parsoid/src/Ext/ParsoidExtensionAPI.php(540): Wikimedia\Parsoid\Utils\DOMUtils::migrateChildren(DOMElement, DOMElement)
#2 /srv/mediawiki/php-1.36.0-wmf.33/vendor/wikimedia/parsoid/src/Ext/Gallery/TraditionalMode.php(205): Wikimedia\Parsoid\Ext\ParsoidExtensionAPI::migrateChildrenAndTransferWrapperDataAttribs(DOMElement, DOMElement)
#3 /srv/mediawiki/php-1.36.0-wmf.33/vendor/wikimedia/parsoid/src/Ext/Gallery/TraditionalMode.php(226): Wikimedia\Parsoid\Ext\Gallery\TraditionalMode->line(Wikimedia\Parsoid\Ext\Gallery\Opts, DOMElement, Wikimedia\Parsoid\Ext\Gallery\ParsedLine)
Aklapper set Phatality ID to 36d06658889351388ef55f9d479e2b5996df8f252eeecfdb32d66e7ad9c23e45. · Mar 5 2021, 10:05 AM

It has almost nine thousand files... edit attempts also time out, at a glance while updating the global usage table. If the page did load, displaying it would probably DoS the thumbnailer as well. I think the only software-side issue here is failing to enforce some kind of sane limit on how many thumbnails a page can contain.

This is not just Parsoid. @Tgr is right - this is primarily a case of setting and enforcing limits. But for now, unless MediaWiki decides to implement infinite scrolling and changes how these kinds of gallery pages are handled (both of which are going to be major changes), this page should be split up on ukwiki.

The action item for MediaWiki-Parser and Parsoid here is to look at image / gallery limits and tweak and enforce them.
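
For illustration only, a rough PHP sketch of the kind of deterministic limit being discussed. Nothing here exists in MediaWiki today: $wgMaxGalleryImages, renderGalleryWithLimit(), the value 500, and the truncation message are all made-up placeholders. The point is simply to fail (or truncate) cheaply before any per-file database or thumbnail work, so oversized galleries produce an explicit error instead of the WMFTimeoutException seen in the trace.

<?php
// Hypothetical sketch: not the real TraditionalImageGallery code path.
$wgMaxGalleryImages = 500; // assumed value, purely for illustration

/**
 * @param string[] $imageLines One entry per <gallery> line (file name + caption).
 * @param int $maxImages
 * @return array{html:string,truncated:bool}
 */
function renderGalleryWithLimit( array $imageLines, int $maxImages ): array {
    $truncated = false;
    if ( count( $imageLines ) > $maxImages ) {
        // Deterministic, cheap check: no DB lookups or thumbnailing have happened yet.
        $imageLines = array_slice( $imageLines, 0, $maxImages );
        $truncated = true;
    }
    $html = "<ul class=\"gallery\">\n";
    foreach ( $imageLines as $line ) {
        // The real gallery would call Parser::fetchFileAndTitle() and build a
        // thumbnail for each line; that per-file work is what the limit protects.
        $html .= "\t<li class=\"gallerybox\">" . htmlspecialchars( $line ) . "</li>\n";
    }
    $html .= "</ul>\n";
    if ( $truncated ) {
        $html .= "<p class=\"error\">Gallery truncated at $maxImages images; "
            . "please split this page.</p>\n";
    }
    return [ 'html' => $html, 'truncated' => $truncated ];
}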

> this page should be split up on ukwiki.

Technically nobody can do anything about it, split it or otherwise, since it times out.

> this page should be split up on ukwiki.

> Technically nobody can do anything about it, split it or otherwise, since it times out.

https://uk.wikipedia.org/wiki/%D0%92%D1%96%D0%BA%D1%96%D0%BF%D0%B5%D0%B4%D1%96%D1%8F:Wiki_Loves_Monuments/%D0%9A%D0%B8%D1%97%D0%B2/%D0%93%D0%BE%D0%BB%D0%BE%D1%81%D1%96%D1%97%D0%B2%D1%81%D1%8C%D0%BA%D0%B8%D0%B9?action=edit will open, so you can cut a section of that wikitext and save; once the size is reduced, it will no longer time out. You can then paste the cut wikitext into a different page.

@ssastry do you have a recommendation for max images per article? (Or max galleries, not sure what the bottleneck is here.) That might help the ukwiki admin who is trying to clean this up.

> It has almost nine thousand files... edit attempts also time out, at a glance while updating the global usage table. If the page did load, displaying it would probably DoS the thumbnailer as well. I think the only software-side issue here is failing to enforce some kind of sane limit on how many thumbnails a page can contain.

The thumbnailer is called asynchronously, per image request. It is possible that the thumbnailer has work to do when a new size is used for all images, but that is rate-limited, and you would have to reload the page many times to create all the missing thumbs in the needed sizes when they are not already prepared.

Generating the HTML should not time out when building a gallery.

An API edit may help as well, because it does not need to load the HTML.
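
For whoever does the cleanup, here is a minimal, untested sketch of such an API edit, using the standard action=edit module with its section parameter so only one chunk of wikitext is replaced per request and the page is never rendered in the browser. The section number, edit summary, and the token/cookie handling are placeholders, not specific advice for this page; the cut wikitext would then be pasted into a new sub-page with a separate edit, as suggested above.

<?php
// Sketch: replace one section of the oversized gallery page via the action API.
// Assumes an authenticated session in cookies.txt and a valid CSRF token.
$token = '<csrf token from action=query&meta=tokens>'; // placeholder

$params = http_build_query( [
    'action'  => 'edit',
    'title'   => 'Вікіпедія:Wiki_Loves_Monuments/Київ/Голосіївський',
    'section' => '2',  // placeholder: the section whose wikitext is being cut
    'text'    => '',   // cut the section; paste its wikitext into a new sub-page
    'summary' => 'Splitting oversized gallery page',
    'token'   => $token,
    'format'  => 'json',
] );

$ch = curl_init( 'https://uk.wikipedia.org/w/api.php' );
curl_setopt_array( $ch, [
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => $params,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_COOKIEFILE     => 'cookies.txt',
    CURLOPT_COOKIEJAR      => 'cookies.txt',
] );
echo curl_exec( $ch );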

Krinkle renamed this task from "Permanent timeout accessing large galleries on uk.wikipedia.org" to "Pages with large galleries on uk.wikipedia.org (9000+ files) timeout instead of failing for explicit complexity limits". · Edited · Apr 1 2021, 8:50 PM
Krinkle subscribed.

Untagging as prod error, but renaming as a parsing issue to address. This is probably complex and high-impact enough to qualify for some kind of deterministic limit instead of something racy that sometimes works and sometimes doesn't. It is especially problematic since these pages tend to "come back to life" from time to time after a cascading update from the jobqueue, but then become uneditable again after that.

See also T254522: Set appropriate wikitext limits for Parsoid to ensure it doesn't OOM