As seen at https://tendril.wikimedia.org/report/, we have a bunch of crawlers of various types hitting non-existent pages. On each such page view we run a move/delete log query, which is fine except when lots of these queries come in at once; they end up taking 16s to 18s.
A possible solution is to skip the LogEventList call in showMissingArticle based on a Bloom filter in Redis, updated on the fly as log events come in. I'm not sure how to estimate the set size needed to keep the false hit rate down.
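For the sizing question, the standard Bloom filter formulas give the bit-array size m and hash count k from the expected item count n and the target false-positive rate p: m = -n ln p / (ln 2)^2 and k = (m/n) ln 2. A minimal sketch (the 10M-item example below is an assumption for illustration, not a measured count of moved/deleted titles):

```python
import math

def bloom_params(n, p):
    """Return (m, k): bit-array size and hash count for a Bloom filter
    expected to hold n items with false-positive rate p."""
    m = math.ceil(-n * math.log(p) / (math.log(2) ** 2))
    k = max(1, round((m / n) * math.log(2)))
    return m, k

# Hypothetical example: 10 million titles at a 1% false-hit rate.
m, k = bloom_params(10_000_000, 0.01)
print(m, k, m / 8 / 1024 / 1024)  # bits, hashes, size in MiB
```

At those numbers this comes out to roughly 96M bits (about 11 MiB) and 7 hash functions, which would fit comfortably in a single Redis bitmap; a false hit only means we run the log query unnecessarily, so p can be fairly loose.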