
Special:ShortPages does not load in Wikimedia Commons: "Read timeout is reached"
Closed, Resolved · Public · PRODUCTION ERROR

Description

For about the past week I have been unable to load https://commons.wikimedia.org/wiki/Special:ShortPages . After a long time of loading, it generates an error message:

Database error - A database query error has occurred. This may indicate a bug in the software.
[WULq@wpAAEUAAWDwK@MAAAAX] 2017-06-15 20:16:56: Fatal exception of type "Wikimedia\Rdbms\DBQueryError"

Event Timeline

Aklapper renamed this task from Special:ShortPages does not load in Wikimedia Commons to Special:ShortPages does not load in Wikimedia Commons: "Read timeout is reached". Jun 15 2017, 9:09 PM

Thanks for reporting this! Going to https://commons.wikimedia.org/wiki/Special:ShortPages I got
[WUL1oApAMFQAAGIq7MQAAAAQ] 2017-06-15 21:02:20: Fatal exception of type "Wikimedia\Rdbms\DBQueryError"

Function: ShortPagesPage::reallyDoQuery
Error: 2062 Read timeout is reached (10.64.48.19)

Josve05a subscribed.

Re-adding Commons, since this only seems to be an issue on Commons, and it is of interest to the Commons community to track the status of.
/Commons-admin

Marostegui triaged this task as Unbreak Now! priority. Edited Jun 16 2017, 7:38 AM
Marostegui added subscribers: Anomie, Marostegui.

IGNORE this comment - this is a different issue. Jump to T168010#3354340 for the root cause of this task.

This is indeed broken - did anything change regarding that query?
The query being executed is:

SELECT  rc_id,rc_timestamp,rc_namespace,rc_title,rc_cur_id,rc_type,rc_deleted,rc_this_oldid,rc_last_oldid  FROM `recentchanges`    WHERE (rc_timestamp>='20170601091948') AND (rc_timestamp<='20170610091948') AND rc_namespace = '6' AND rc_type = '3'  ORDER BY rc_timestamp ASC,rc_id ASC LIMIT 201  ;

That query times out on db1059 (an API db).
Executing that query manually takes:

201 rows in set (2 min 34.85 sec)

The rc_timestamp index does indeed look like the best available index for that query.

root@db1059[commonswiki]> explain SELECT  rc_id,rc_timestamp,rc_namespace,rc_title,rc_cur_id,rc_type,rc_deleted,rc_this_oldid,rc_last_oldid  FROM `recentchanges`    WHERE (rc_timestamp>='20170601091948') AND (rc_timestamp<='20170610091948') AND rc_namespace = '6' AND rc_type = '3'  ORDER BY rc_timestamp ASC,rc_id ASC LIMIT 201  ;
+------+-------------+---------------+-------+---------------------------------------------------------------------------------------+--------------+---------+------+------+-------------+
| id   | select_type | table         | type  | possible_keys                                                                         | key          | key_len | ref  | rows | Extra       |
+------+-------------+---------------+-------+---------------------------------------------------------------------------------------+--------------+---------+------+------+-------------+
|    1 | SIMPLE      | recentchanges | index | rc_timestamp,rc_namespace_title,rc_ns_usertext,tmp_3,rc_name_type_patrolled_timestamp | rc_timestamp | 16      | NULL | 5326 | Using where |
+------+-------------+---------------+-------+---------------------------------------------------------------------------------------+--------------+---------+------+------+-------------+
1 row in set (0.00 sec)

In practice it is not using the rc_timestamp index (the running query picks a different one, as the SHOW EXPLAIN further down confirms), which creates the issue:

db1081[commonswiki]> EXPLAIN SELECT  rc_id,rc_timestamp,rc_namespace,rc_title,rc_cur_id,rc_type,rc_deleted,rc_this_oldid,rc_last_oldid  FROM `recentchanges`    WHERE (rc_timestamp>='20170601091948') AND (rc_timestamp<='20170610091948') AND rc_namespace = '6' AND rc_type = '3'  ORDER BY rc_timestamp ASC,rc_id ASC LIMIT 201\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: recentchanges
         type: index
possible_keys: rc_timestamp,rc_namespace_title,rc_ns_usertext,tmp_3,rc_name_type_patrolled_timestamp
          key: rc_timestamp
      key_len: 16
          ref: NULL
         rows: 4578
        Extra: Using where
1 row in set (0.00 sec)

db1081[commonswiki]> EXPLAIN SELECT  rc_id,rc_timestamp,rc_namespace,rc_title,rc_cur_id,rc_type,rc_deleted,rc_this_oldid,rc_last_oldid  FROM `recentchanges` FORCE INDEX(rc_timestamp)    WHERE (rc_timestamp>='20170601091948') AND (rc_timestamp<='20170610091948') AND rc_namespace = '6' AND rc_type = '3'  ORDER BY rc_timestamp ASC,rc_id ASC LIMIT 201\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: recentchanges
         type: range
possible_keys: rc_timestamp
          key: rc_timestamp
      key_len: 16
          ref: NULL
         rows: 23156312
        Extra: Using index condition; Using where
1 row in set (0.00 sec)

db1081[commonswiki]> SELECT  rc_id,rc_timestamp,rc_namespace,rc_title,rc_cur_id,rc_type,rc_deleted,rc_this_oldid,rc_last_oldid  FROM `recentchanges` FORCE INDEX(rc_timestamp)    WHERE (rc_timestamp>='20170601091948') AND (rc_timestamp<='20170610091948') AND rc_namespace = '6' AND rc_type = '3'  ORDER BY rc_timestamp ASC,rc_id ASC LIMIT 201\G        
[...]
201 rows in set (0.01 sec)

probably because of the rows estimate: 23156312

Thanks Jaime for putting that clearly - I re-read my message and it was indeed confusing.

I guess we can try an ANALYZE TABLE there to see if the optimizer starts using it again; otherwise we'd need to modify the query to force the index.

Yes, I will depool one server and try that or other things, if that doesn't work.

Change 359394 had a related patch set uploaded (by Jcrespo; owner: Jcrespo):
[operations/mediawiki-config@master] mariadb: Depool db1091 for performance testing

https://gerrit.wikimedia.org/r/359394

Change 359394 merged by jenkins-bot:
[operations/mediawiki-config@master] mariadb: Depool db1091 for performance testing

https://gerrit.wikimedia.org/r/359394

This is the actual EXPLAIN for the running query:

db1091.eqiad.wmnet[commonswiki]> SHOW EXPLAIN FOR 2780971629\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: recentchanges
         type: ref
possible_keys: rc_timestamp,rc_namespace_title,rc_ns_usertext,tmp_3,rc_name_type_patrolled_timestamp
          key: rc_name_type_patrolled_timestamp
      key_len: 5
          ref: const,const
         rows: 2279688
        Extra: Using index condition; Using where; Using filesort
1 row in set, 1 warning (0.00 sec)

It is using rc_name_type_patrolled_timestamp.

The maintenance will take a while, because:

db1091.eqiad.wmnet[commonswiki]> SELECT count(*) FROM recentchanges;
+----------+
| count(*) |
+----------+
| 59632053 |
+----------+

Would it be worth forcing the index in the code too? Even if it gets fixed after the ANALYZE, it can happen again whenever the optimizer decides to go dumb again :(

Refreshing the stats didn't help:

db1091[commonswiki]> ANALYZE TABLE recentchanges;
+---------------------------+---------+----------+-----------------------------------------+
| Table                     | Op      | Msg_type | Msg_text                                |
+---------------------------+---------+----------+-----------------------------------------+
| commonswiki.recentchanges | analyze | status   | Engine-independent statistics collected |
| commonswiki.recentchanges | analyze | status   | OK                                      |
+---------------------------+---------+----------+-----------------------------------------+
2 rows in set (21 min 13.80 sec)

We should try to avoid that; look at these new queries - they break when the index hint is added:

root@db1091[commonswiki]> SELECT  rc_id,rc_timestamp,rc_namespace,rc_title,rc_cur_id,rc_type,rc_deleted,rc_this_oldid,rc_last_oldid  FROM `recentchanges` IGNORE INDEX(rc_name_type_patrolled_timestamp)    WHERE (rc_timestamp>='20170601091948') AND (rc_timestamp<='20170610091948') AND rc_namespace = '0' AND rc_type = '3'  ORDER BY rc_timestamp ASC,rc_id ASC LIMIT 201;
^CCtrl-C -- query killed. Continuing normally.
ERROR 1317 (70100): Query execution was interrupted
root@db1091[commonswiki]> SELECT  rc_id,rc_timestamp,rc_namespace,rc_title,rc_cur_id,rc_type,rc_deleted,rc_this_oldid,rc_last_oldid  FROM `recentchanges` FORCE INDEX(rc_timestamp)    WHERE (rc_timestamp>='20170601091948') AND (rc_timestamp<='20170610091948') AND rc_namespace = '0' AND rc_type = '3'  ORDER BY rc_timestamp ASC,rc_id ASC LIMIT 201;
^CCtrl-C -- query killed. Continuing normally.
ERROR 1317 (70100): Query execution was interrupted

This seems to be a Commons-specific problem - it causes issues when there are many recentchanges rows in a non-0 namespace. We can make it conditional, adding an IGNORE INDEX only when querying a separate namespace (see the sketch below). I will check the code.
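As a minimal sketch of that conditional hint, assuming MediaWiki's IDatabase::select() options array and hypothetical surrounding variables ($dbr, $namespace, $start, $end):

$options = [
	'ORDER BY' => [ 'rc_timestamp ASC', 'rc_id ASC' ],
	'LIMIT' => 201,
];
if ( $namespace !== NS_MAIN ) {
	// Only steer the optimizer away from the combined index when
	// filtering on a non-0 namespace, where it misestimates badly.
	$options['IGNORE INDEX'] = [ 'recentchanges' => 'rc_name_type_patrolled_timestamp' ];
}
$res = $dbr->select(
	'recentchanges',
	[ 'rc_id', 'rc_timestamp', 'rc_namespace', 'rc_title' ],
	[
		'rc_timestamp >= ' . $dbr->addQuotes( $start ),
		'rc_timestamp <= ' . $dbr->addQuotes( $end ),
		'rc_namespace' => $namespace,
		'rc_type' => RC_LOG, // 3
	],
	__METHOD__,
	$options
);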

@Marostegui where did you get that query? I cannot see any reference to recentchanges on ShortPagesPage on HEAD.

I think all this analysis is valid, but for a completely different query: /w/api.php?format=xml&rawcontinue=1&maxlag=3&action=query&list=recentchanges&rcstart=20170601132020&rcend=20170610132020&rcdir=newer&rcnamespace=6&rclimit=200&rctype=log. Otherwise it is off-topic here.

@Marostegui where did you get that query? I cannot see any reference to recentchanges on ShortPagesPage on HEAD.

I tested the link in the original task description and then, when it timed out, I looked for it in the logs: https://logstash.wikimedia.org/goto/28e84cfa29b92ce9dccd4c6f7faa46c3

I am 100% sure that the bad query here is:

function: ShortPagesPage::reallyDoQuery
message: Read timeout is reached (10.64.48.19)
query: SELECT  page_namespace AS `namespace`,page_title AS `title`,page_len AS `value`  FROM `page` FORCE INDEX (page_redirect_namespace_len) LEFT JOIN `page_props` ON ((page_id = pp_page) AND pp_propname = 'disambiguation')   WHERE page_namespace IN ('6','0')  AND page_is_redirect = '0' AND pp_page IS NULL  ORDER BY page_len LIMIT 51
db1091[commonswiki]> EXPLAIN SELECT  page_namespace AS `namespace`,page_title AS `title`,page_len AS `value`  FROM `page` FORCE INDEX (page_redirect_namespace_len) LEFT JOIN `page_props` ON ((page_id = pp_page) AND pp_propname = 'disambiguation')   WHERE page_namespace IN ('6','0')  AND page_is_redirect = '0' AND pp_page IS NULL  ORDER BY page_len LIMIT 51\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: page
         type: range
possible_keys: page_redirect_namespace_len
          key: page_redirect_namespace_len
      key_len: 5
          ref: NULL
         rows: 27634594
        Extra: Using index condition; Rowid-ordered scan; Using filesort
*************************** 2. row ***************************
           id: 1
  select_type: SIMPLE
        table: page_props
         type: eq_ref
possible_keys: PRIMARY,pp_propname_page,pp_propname_sortkey_page
          key: PRIMARY
      key_len: 66
          ref: commonswiki.page.page_id,const
         rows: 1
        Extra: Using where; Using index; Not exists
db1091.eqiad.wmnet[commonswiki]> SHOW EXPLAIN FOR 2780971083\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: page
         type: range
possible_keys: page_redirect_namespace_len
          key: page_redirect_namespace_len
      key_len: 5
          ref: NULL
         rows: 27634607
        Extra: Using index condition; Rowid-ordered scan; Using filesort
*************************** 2. row ***************************
           id: 1
  select_type: SIMPLE
        table: page_props
         type: eq_ref
possible_keys: PRIMARY,pp_propname_page,pp_propname_sortkey_page
          key: PRIMARY
      key_len: 66
          ref: commonswiki.page.page_id,const
         rows: 1
        Extra: Using where; Using index; Not exists
2 rows in set, 1 warning (0.01 sec)

It works much better without the FORCE INDEX:

db1091.eqiad.wmnet[commonswiki]> EXPLAIN SELECT  page_namespace AS `namespace`,page_title AS `title`,page_len AS `value`  FROM `page` LEFT JOIN `page_props` ON ((page_id = pp_page) AND pp_propname = 'disambiguation')   WHERE page_namespace IN ('6','0')  AND page_is_redirect = '0' AND pp_page IS NULL  ORDER BY page_len LIMIT 51\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: page
         type: index
possible_keys: name_title,page_redirect_namespace_len
          key: page_len
      key_len: 4
          ref: NULL
         rows: 101
        Extra: Using where
*************************** 2. row ***************************
           id: 1
  select_type: SIMPLE
        table: page_props
         type: eq_ref
possible_keys: PRIMARY,pp_propname_page,pp_propname_sortkey_page
          key: PRIMARY
      key_len: 66
          ref: commonswiki.page.page_id,const
         rows: 1
        Extra: Using where; Using index; Not exists
2 rows in set (0.00 sec)

The special thing about Commons is that it adds namespace 6 (File) to the list of content namespaces:

'+commonswiki' => [ 6 ], // T167077
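For context, roughly how such a per-wiki override is written in Wikimedia's InitialiseSettings.php (a sketch only; the '+' prefix merges the per-wiki value into the defaults, and the surrounding entries are assumed):

'wgContentNamespaces' => [
	'default' => [ NS_MAIN ],
	'+commonswiki' => [ NS_FILE ], // T167077: File pages count as content
],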

On large wikis with extra content namespaces, like eswiki, the original query is very slow too (5 seconds), while without the FORCE INDEX it is more or less similar, with a different plan:

With Force:

+----------------------------+---------+
| Variable_name              | Value   |
+----------------------------+---------+
| Handler_commit             | 1       |
| Handler_delete             | 0       |
| Handler_discover           | 0       |
| Handler_external_lock      | 0       |
| Handler_icp_attempts       | 1340038 |
| Handler_icp_match          | 1340038 |
| Handler_mrr_init           | 1       |
| Handler_mrr_key_refills    | 0       |
| Handler_mrr_rowid_refills  | 20      |
| Handler_prepare            | 0       |
| Handler_read_first         | 0       |
| Handler_read_key           | 1033    |
| Handler_read_last          | 0       |
| Handler_read_next          | 1340030 |
| Handler_read_prev          | 0       |
| Handler_read_retry         | 0       |
| Handler_read_rnd           | 1340030 |
| Handler_read_rnd_deleted   | 0       |
| Handler_read_rnd_next      | 0       |
| Handler_rollback           | 0       |
| Handler_savepoint          | 0       |
| Handler_savepoint_rollback | 0       |
| Handler_tmp_update         | 0       |
| Handler_tmp_write          | 0       |
| Handler_update             | 0       |
| Handler_write              | 0       |
+----------------------------+---------+

Without force:

+----------------------------+---------+
| Variable_name              | Value   |
+----------------------------+---------+
| Handler_commit             | 1       |
| Handler_delete             | 0       |
| Handler_discover           | 0       |
| Handler_external_lock      | 0       |
| Handler_icp_attempts       | 0       |
| Handler_icp_match          | 0       |
| Handler_mrr_init           | 0       |
| Handler_mrr_key_refills    | 0       |
| Handler_mrr_rowid_refills  | 0       |
| Handler_prepare            | 0       |
| Handler_read_first         | 1       |
| Handler_read_key           | 1031    |
| Handler_read_last          | 0       |
| Handler_read_next          | 2318393 |
| Handler_read_prev          | 0       |
| Handler_read_retry         | 0       |
| Handler_read_rnd           | 0       |
| Handler_read_rnd_deleted   | 0       |
| Handler_read_rnd_next      | 0       |
| Handler_rollback           | 0       |
| Handler_savepoint          | 0       |
| Handler_savepoint_rollback | 0       |
| Handler_tmp_update         | 0       |
| Handler_tmp_write          | 0       |
| Handler_update             | 0       |
| Handler_write              | 0       |
+----------------------------+---------+

However, with the patch, dewiki and wikidatawiki are several times slower.

Looking at the history, it seems the index was added for T28393: Make page_len index actually useful so Special:Longpages and Special:Shortpages will be efficient, specifically to avoid the "use the page_len index, filtering by namespace" plan that's probably making dewiki and wikidatawiki slow.

The only way I can think of to do the query efficiently with multiple namespaces would be something like T149077#2741864, e.g.

(SELECT  page_namespace AS `namespace`,page_title AS `title`,page_len AS `value`  FROM `page` FORCE INDEX (page_redirect_namespace_len) LEFT JOIN `page_props` ON ((page_id = pp_page) AND pp_propname = 'disambiguation')   WHERE page_namespace = '6' AND page_is_redirect = '0' AND pp_page IS NULL  ORDER BY page_len LIMIT 51)
UNION ALL
(SELECT  page_namespace AS `namespace`,page_title AS `title`,page_len AS `value`  FROM `page` FORCE INDEX (page_redirect_namespace_len) LEFT JOIN `page_props` ON ((page_id = pp_page) AND pp_propname = 'disambiguation')   WHERE page_namespace = '0' AND page_is_redirect = '0' AND pp_page IS NULL  ORDER BY page_len LIMIT 51)
ORDER BY value LIMIT 51;

Ha, we reached the same conclusion independently!

db1091[commonswiki]> EXPLAIN (SELECT  page_namespace AS `namespace`,page_title AS `title`,page_len AS `value`  FROM `page` FORCE INDEX (page_redirect_namespace_len) LEFT JOIN `page_props` ON ((page_id = pp_page) AND pp_propname = 'disambiguation')   WHERE page_namespace IN ('0')  AND page_is_redirect = '0' AND pp_page IS NULL  ORDER BY page_len LIMIT 51) union all (SELECT  page_namespace AS `namespace`,page_title AS `title`,page_len AS `value`  FROM `page` FORCE INDEX (page_redirect_namespace_len) LEFT JOIN `page_props` ON ((page_id = pp_page) AND pp_propname = 'disambiguation')   WHERE page_namespace IN ('6')  AND page_is_redirect = '0' AND pp_page IS NULL  ORDER BY page_len LIMIT 51) ORDER BY value LIMIT 51\G
*************************** 1. row ***************************
           id: 1
  select_type: PRIMARY
        table: page
         type: ref
possible_keys: page_redirect_namespace_len
          key: page_redirect_namespace_len
      key_len: 5
          ref: const,const
         rows: 198780
        Extra: Using where
*************************** 2. row ***************************
           id: 1
  select_type: PRIMARY
        table: page_props
         type: eq_ref
possible_keys: PRIMARY,pp_propname_page,pp_propname_sortkey_page
          key: PRIMARY
      key_len: 66
          ref: commonswiki.page.page_id,const
         rows: 1
        Extra: Using where; Using index; Not exists
*************************** 3. row ***************************
           id: 2
  select_type: UNION
        table: page
         type: ref
possible_keys: page_redirect_namespace_len
          key: page_redirect_namespace_len
      key_len: 5
          ref: const,const
         rows: 27438257
        Extra: Using where
*************************** 4. row ***************************
           id: 2
  select_type: UNION
        table: page_props
         type: eq_ref
possible_keys: PRIMARY,pp_propname_page,pp_propname_sortkey_page
          key: PRIMARY
      key_len: 66
          ref: commonswiki.page.page_id,const
         rows: 1
        Extra: Using where; Using index; Not exists
*************************** 5. row ***************************
           id: NULL
  select_type: UNION RESULT
        table: <union1,2>
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: NULL
        Extra: Using filesort
5 rows in set (0.00 sec)

That is one hell of a query - however, we do not need to do it in SQL; we can do it programmatically too, and most wikis will only need one query anyway. But this is definitely the direction. Which of the two options seems better to you?

What's the other option besides the union query we both came up with?

What's the other option besides the union query we both came up with?

Keeping the original query unchanged (except for the namespace list), and doing the union, sort and filter in PHP (one query per content namespace; see the sketch below). I do not think the optimization trade-off here (lower DB CPU vs. extra latency) is a huge problem anyway, as this will not be called frequently.
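A rough sketch of that PHP-side option (fetchShortPagesForNamespace() is a hypothetical stand-in for the existing single-namespace query, which could keep its FORCE INDEX):

// Run the per-namespace query once per content namespace, then merge,
// sort by page_len (aliased as "value") and cut to the limit in PHP.
$rows = [];
foreach ( $contentNamespaces as $ns ) {
	foreach ( fetchShortPagesForNamespace( $dbr, $ns, $limit ) as $row ) {
		$rows[] = $row;
	}
}
usort( $rows, function ( $a, $b ) {
	return $a->value <=> $b->value;
} );
$rows = array_slice( $rows, 0, $limit );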

Change 359404 abandoned by Jcrespo:
SpecialShortPages: Remove index force(page_redirect_namespace_len)

Reason:
It fixes some wikis, but reintroduces another regression.

https://gerrit.wikimedia.org/r/359404

I'm a little wary of making multiple queries and filtering the results in PHP (i.e. the manual union).

  • Pro: The DB probably saves peak memory usage by not having to collect $limit*$numSubqueries rows all at once.
  • Pro: The DB saves CPU usage in not having to filesort those rows, although hopefully sorting isn't that bad for the number of rows involved.
  • Con: Multiple back-and-forths and more data transferred between the DB and the appserver.
  • Con: PHP code to do the merging is likely less efficient than C code in MariaDB. What are the chances we can come up with a better algorithm in PHP than MariaDB uses? What are the chances MariaDB devs thought up better algorithms than we wind up implementing?

Now that we have two things that would need it, I'm tempted to add a method in PHP that would generate the unioned queries and use it for both this and T149077, if we're going to do these with unions. A sketch of what such a helper could look like follows.
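Something along those lines, sketched against MediaWiki's IDatabase API (the function name and exact option set are illustrative assumptions, not the eventual patch):

use Wikimedia\Rdbms\IDatabase;

/**
 * Build one Shortpages subquery per content namespace, glue them
 * together with UNION ALL, then re-sort and re-limit the combined set.
 * A sketch only: names and options are assumptions.
 */
function buildShortPagesUnionSql( IDatabase $dbr, array $namespaces, $limit ) {
	$sqls = [];
	foreach ( $namespaces as $ns ) {
		$sqls[] = $dbr->selectSQLText(
			[ 'page', 'page_props' ],
			[ 'namespace' => 'page_namespace', 'title' => 'page_title', 'value' => 'page_len' ],
			[ 'page_namespace' => $ns, 'page_is_redirect' => 0, 'pp_page' => null ],
			__METHOD__,
			[
				'ORDER BY' => 'page_len',
				'LIMIT' => $limit,
				// On MySQL/MariaDB this option emits a FORCE INDEX clause
				'USE INDEX' => [ 'page' => 'page_redirect_namespace_len' ],
			],
			[ 'page_props' => [ 'LEFT JOIN', [ 'page_id = pp_page', 'pp_propname' => 'disambiguation' ] ] ]
		);
	}
	// unionQueries() parenthesizes each SELECT and joins them with UNION ALL
	return $dbr->unionQueries( $sqls, true ) . " ORDER BY value LIMIT $limit";
}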

I agree with what you say, one thing though:

What are the chances we can come up with a better algorithm in PHP than MariaDB uses

Because of the declarative nature of SQL - which can lead to horrible plans - plus the fact that SQL has to be interpreted while PHP will be compiled, that is not a huge overstatement in every case. It is not the "right" thing to do, nor the rule, but I have definitely found some use cases in the wild for doing that (even considering the extra round-trips).

I agree this is not one of those cases.

How much time do you think the extra refactoring would take? The last thing I want is to pressure you, but I would like to have something for this Commons special page, even if it is a hack for now (as it is causing noticeable problems for users).

Thank you for the help, BTW.

Change 359502 had a related patch set uploaded (by Anomie; owner: Anomie):
[mediawiki/core@master] Adjust Shortpages query with multiple content namespaces

https://gerrit.wikimedia.org/r/359502

This "Unbreak Now" priority task has not seen updates for two weeks. Who plans to review https://gerrit.wikimedia.org/r/#/c/359502/ ? Is this still UBN prio?

Who plans to review

I think nobody, so we should just merge it - worst case, we break what was already broken.

Wondering if T169908 is another instance of this?

Wondering if T169908 is another instance of this?

Doesn't look like it. T169908 seems to be lock contention rather than a slow query that needs changing to be able to use indexes appropriately.

I do not know much about MediaWiki team responsibilities, but shouldn't this be something that either the Reading or the Editing team should own, as it affects readers or contributors (or both)? Asking from ignorance; I may be wrong.

We just received an email report complaining about this breakage.

I'll review this once I'm free, in about half an hour. As for the responsible team, I think that's probably the MediaWiki team in this case, but now that I've seen it anyway I'll take a quick look at it.

Change 359502 merged by jenkins-bot:
[mediawiki/core@master] Adjust Shortpages query with multiple content namespaces

https://gerrit.wikimedia.org/r/359502

After yesterday's release, this no longer fails for me.
@Jcb can you confirm this is resolved?

I don't think file pages (ns 6) are wanted for that query on Commons. It is locally named "Short galleries" (for ns 0). The fact that a lot of files only have 14 bytes of (templated) text isn't really something anybody will be interested in, I believe.

@Josve05a That is a completely different matter (I am not saying you are not right, I am saying it is out of scope here). This is about a MediaWiki software bug: the page not showing results and failing with a database error. The reason was that it generated inefficient queries when using multiple namespaces. That particular issue has already been solved (from what I can see), thanks to the kind help of @Anomie and @Catrope. It used to affect Meta, too: https://meta.wikimedia.org/wiki/Special:ShortPages which now works.

From what I can see, the change you mention was introduced in T167077 and previously discussed at https://commons.wikimedia.org/wiki/Commons:Village_pump#Should_content_pages_consist_of_galleries_only_or_also_include_File_pages.3F , where most people seemed to agree at first, but not everyone may have been aware of all the consequences of the change (like the one you mention).

I totally understand your point and even agree with it personally. I would suggest raising the issue in the wiki discussion; I think any decision could be implemented now (including the complete or partial reversion of the definition of content pages on Commons) - but on a different (new) ticket with the Wikimedia-Site-requests tag.

Please remove the file pages from the results; the report is completely useless now.

@Jcb Adding images as "content pages" did that. It can be reverted with no problem; I am just saying that such a request has to be filed on a separate ticket, or it will get lost here.

In other words, doing what was requested at https://commons.wikimedia.org/wiki/Commons:Village_pump#Should_content_pages_consist_of_galleries_only_or_also_include_File_pages.3F modified the ShortPages behaviour (as it lists content pages).

This fix was just for the database error - the content of that page was not altered by the fix; it was altered by T167077, so please complain there :-).

jcrespo assigned this task to Anomie.

AFAIK @Jcb @Josve05a your (more than reasonable) complaints are already being handled seriously at T170687: [[special:ShortPages]] includes file pages on Commons. I am going to close this ticket (Special:ShortPages does not load in Wikimedia Commons: "Read timeout is reached", aka query performance issues) as resolved within its original scope (it no longer errors out), without prejudice to the list changing its behaviour to a more useful one. Thanks for the input, and thanks to all the people who collaborated on the initial error fix.

mmodell changed the subtype of this task from "Task" to "Production Error". Aug 28 2019, 11:10 PM

Change 927258 had a related patch set uploaded (by Krinkle; author: Krinkle):

[mediawiki/core@master] SpecialShortPages: Document reason for "reallyDoQuery" override

https://gerrit.wikimedia.org/r/927258

Change 927258 merged by jenkins-bot:

[mediawiki/core@master] SpecialShortPages: Document reason for "reallyDoQuery" override

https://gerrit.wikimedia.org/r/927258