Logstash is full of warnings like this:
```
Expectation (readQueryTime <= 5) by ApiMain::setRequestExpectations not met (actual: 41.6174659729):
query: SELECT rc_id,rc_timestamp,rc_namespace,rc_title,rc_cur_id,rc_type,rc_deleted,rc_this_oldid,rc_last_oldid FROM `recentchanges` INNER JOIN `ores_model` ON...
```
The query in question, and its plan:
```
mysql:wikiadmin@db1080 [enwiki]> DESCRIBE SELECT rc_id,rc_timestamp,rc_namespace,rc_title,rc_cur_id,rc_type,rc_deleted,rc_this_oldid,rc_last_oldid FROM `recentchanges` INNER JOIN `ores_model` ON ((oresm_name = 'damaging' AND oresm_is_current = 1)) INNER JOIN `ores_classification` ON ((rc_this_oldid = oresc_rev AND oresc_model = oresm_id AND oresc_class = 1)) WHERE rc_type IN ('0','1','3','6') AND (oresc_probability > '0.49') ORDER BY rc_timestamp DESC,rc_id DESC LIMIT 11 ;
+------+-------------+---------------------+------+-----------------------------------------+-------------------+---------+------------------------------------+---------+-----------------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+---------------------+------+-----------------------------------------+-------------------+---------+------------------------------------+---------+-----------------------------------------------------------+
| 1 | SIMPLE | ores_model | ref | PRIMARY,oresm_version,ores_model_status | ores_model_status | 35 | const,const | 1 | Using where; Using index; Using temporary; Using filesort |
| 1 | SIMPLE | recentchanges | ALL | tmp_1 | NULL | NULL | NULL | 9933340 | Using where; Using join buffer (flat, BNL join) |
| 1 | SIMPLE | ores_classification | ref | oresc_winner | oresc_winner | 4 | enwiki.recentchanges.rc_this_oldid | 1 | Using where |
+------+-------------+---------------------+------+-----------------------------------------+-------------------+---------+------------------------------------+---------+-----------------------------------------------------------+
```
Even for scanning 10 million rows, 40+ seconds seems pretty extreme. And there should be no reason to scan that many: without the limit, the query matches about a million rows (roughly one in ten), so a scan in `rc_timestamp` order should hit the limit of 11 after only a few hundred rows. Instead, the plan drives the join from `ores_model`, does a full scan of `recentchanges` (`type: ALL`, join buffer, BNL join), and filesorts the result.
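One way to test that theory (a sketch, not verified against enwiki; the index name `rc_timestamp` is an assumption about the `recentchanges` schema — check with `SHOW INDEX FROM recentchanges`) is to force the join order and the timestamp index, so rows come out already in `ORDER BY` order and the scan can stop as soon as the limit is satisfied:
```
-- Sketch: STRAIGHT_JOIN makes recentchanges the driving table,
-- FORCE INDEX makes it walk the (assumed) rc_timestamp index
-- newest-first, so LIMIT 11 can short-circuit the scan.
SELECT STRAIGHT_JOIN rc_id, rc_timestamp, rc_namespace, rc_title,
       rc_cur_id, rc_type, rc_deleted, rc_this_oldid, rc_last_oldid
FROM `recentchanges` FORCE INDEX (rc_timestamp)
INNER JOIN `ores_classification`
  ON rc_this_oldid = oresc_rev AND oresc_class = 1
INNER JOIN `ores_model`
  ON oresc_model = oresm_id
 AND oresm_name = 'damaging' AND oresm_is_current = 1
WHERE rc_type IN ('0','1','3','6')
  AND oresc_probability > 0.49
ORDER BY rc_timestamp DESC, rc_id DESC
LIMIT 11;
```
If `DESCRIBE` on this variant shows `recentchanges` first with no filesort, the problem is the optimizer's join-order choice rather than the query shape itself.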