I just realised that a wikidatawiki vslow host was running the following query. It had been running for more than 9 hours and was creating some lag on that host.
+------+-------------+----------+------+------------------------------------------------+-----------------------------+---------+---------------------------+----------+---------------------------------------------------------------------+
| id   | select_type | table    | type | possible_keys                                  | key                         | key_len | ref                       | rows     | Extra                                                               |
+------+-------------+----------+------+------------------------------------------------+-----------------------------+---------+---------------------------+----------+---------------------------------------------------------------------+
|    1 | SIMPLE      | page     | ref  | PRIMARY,name_title,page_redirect_namespace_len | page_redirect_namespace_len | 1       | const                     | 35724007 | Using index condition; Using where; Using temporary; Using filesort |
|    1 | SIMPLE      | revision | ref  | page_timestamp,page_user_timestamp,rev_page_id | rev_page_id                 | 4       | wikidatawiki.page.page_id | 8        | Using index                                                         |
+------+-------------+----------+------+------------------------------------------------+-----------------------------+---------+---------------------------+----------+---------------------------------------------------------------------+
2 rows in set (0.00 sec)
It looks like it was coming from a cronjob. Is it really needed, or is there a way to split it into multiple chunks/processes? The EXPLAIN above shows it scanning roughly 35M rows of the page table with a filesort, which explains the long runtime.
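For the chunking idea, one common pattern is to walk the table in bounded primary-key ranges so each statement touches a limited number of rows and replication can keep up between chunks. This is only a sketch under assumptions: the original query is not shown here, so the table/column names (page, page_id) and the SQL in the comment are hypothetical.

```python
# Hypothetical sketch: split one huge scan into bounded chunks by
# primary-key range. Table/column names are assumptions, since the
# offending query itself is not quoted in this task.

def chunk_ranges(start_id, max_id, chunk_size):
    """Yield inclusive (low, high) page_id boundaries covering [start_id, max_id]."""
    low = start_id
    while low <= max_id:
        high = min(low + chunk_size - 1, max_id)
        yield (low, high)
        low = high + 1

# Each chunk would then run as its own short query, e.g. something like:
#   SELECT ... FROM page JOIN revision ON rev_page = page_id
#   WHERE page_id BETWEEN %s AND %s ...
# optionally sleeping between chunks so the vslow host can catch up.
```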
Please feel free to change/remove/add tags as needed.