For T163562 we need to run the populateIpChanges.php maintenance script (T175105). This typically goes reasonably fast:
```
mysql:wikiadmin@db1084 [commonswiki]> EXPLAIN SELECT /* PopulateIpChanges::doDBUpdates */ rev_id, rev_timestamp, rev_user_text FROM `revision` WHERE rev_id >= 1000000 AND rev_user = 0 ORDER BY rev_id ASC LIMIT 200;
+------+-------------+----------+-------+------------------------+---------+---------+------+------+-------------+
| id   | select_type | table    | type  | possible_keys          | key     | key_len | ref  | rows | Extra       |
+------+-------------+----------+-------+------------------------+---------+---------+------+------+-------------+
|    1 | SIMPLE      | revision | index | PRIMARY,user_timestamp | PRIMARY | 4       | NULL | 7336 | Using where |
+------+-------------+----------+-------+------------------------+---------+---------+------+------+-------------+
1 row in set (0.00 sec)

mysql:wikiadmin@db1084 [commonswiki]> SELECT /* PopulateIpChanges::doDBUpdates */ rev_id, rev_timestamp, rev_user_text FROM `revision` WHERE rev_id >= 1000000 AND rev_user = 0 ORDER BY rev_id ASC LIMIT 200;
...
200 rows in set (0.32 sec)
```
But it runs very slowly once we reach revisions with a high rev_id:
```
mysql:wikiadmin@db1084 [commonswiki]> EXPLAIN SELECT /* PopulateIpChanges::doDBUpdates */ rev_id, rev_timestamp, rev_user_text FROM `revision` WHERE rev_id >= 75000000 AND rev_user = 0 ORDER BY rev_id ASC LIMIT 200;
+------+-------------+----------+-------+------------------------+---------+---------+------+------+-------------+
| id   | select_type | table    | type  | possible_keys          | key     | key_len | ref  | rows | Extra       |
+------+-------------+----------+-------+------------------------+---------+---------+------+------+-------------+
|    1 | SIMPLE      | revision | index | PRIMARY,user_timestamp | PRIMARY | 4       | NULL | 7335 | Using where |
+------+-------------+----------+-------+------------------------+---------+---------+------+------+-------------+
1 row in set (0.00 sec)

mysql:wikiadmin@db1084 [commonswiki]> SELECT /* PopulateIpChanges::doDBUpdates */ rev_id, rev_timestamp, rev_user_text FROM `revision` WHERE rev_id >= 75000000 AND rev_user = 0 ORDER BY rev_id ASC LIMIT 200;
...
200 rows in set (44.86 sec)
```
The EXPLAINs, however, are nearly identical. Presumably the server walks the PRIMARY key upward from the starting rev_id and filters on rev_user = 0 as it goes, so in ranges where anonymous revisions are sparse it has to scan far more rows than the `rows` estimate suggests before it collects 200 matches.
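One way to see the difference that the near-identical EXPLAIN plans hide (a sketch, assuming MariaDB's session status counters; I haven't run this against the production hosts):

```sql
-- Reset this session's status counters, run the slow query, then see how
-- many index entries were actually walked (Handler_read_next) to find
-- the 200 matching rows.
FLUSH STATUS;
SELECT rev_id, rev_timestamp, rev_user_text
FROM revision
WHERE rev_id >= 75000000 AND rev_user = 0
ORDER BY rev_id ASC LIMIT 200;
SHOW SESSION STATUS LIKE 'Handler_read%';
```

If the theory is right, Handler_read_next should be orders of magnitude larger for the high-rev_id query than for the low-rev_id one.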
See also https://tendril.wikimedia.org/report/slow_queries_checksum?checksum=ab089fbcbcaaff33193f518ca8298ad4&host=family%3Adb1097&user=&schema=&hours=1 (not sure if this is a permalink)
The commonswiki script will probably finish within the next 12-18 hours, and the other group 1 wikis probably won't take forever either. However, as you can imagine, this isn't going to work for enwiki. I'm not sure how to improve the query, or what a better approach would be.
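One possible direction (just a sketch, not something I've benchmarked): instead of asking the server for the next 200 matching rows, which forces an unbounded index walk through ranges where anonymous revisions are sparse, the script could iterate over fixed-size rev_id windows so each query touches a bounded slice of the PRIMARY key:

```sql
-- Hypothetical rewrite: scan a fixed-size rev_id window per query instead
-- of using LIMIT to drive batching. Each query reads at most 50000
-- PRIMARY-key entries no matter how sparse rev_user = 0 rows are there.
SELECT rev_id, rev_timestamp, rev_user_text
FROM revision
WHERE rev_id >= 75000000 AND rev_id < 75050000
  AND rev_user = 0
ORDER BY rev_id ASC;
-- ...then advance the window by 50000 and repeat until MAX(rev_id).
```

The trade-off is more (but uniformly cheap) queries in sparse ranges, rather than a few queries whose cost is unpredictable.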