If a page takes longer than N seconds to parse, add it to a hidden tracking category ('Slow pages' or whatever). Care must be taken not to inundate the category with entries in case a partial outage causes all parse operations to be slow.
We may also want to redefine the current scope of "slow-parse".
Right now, if I understand correctly, slow-parse is only logged when an article is parsed on demand during a GET request (via PoolWorkArticleView; presumably triggered when the parser cache has expired, when logged-in users view articles in a user language other than the content language, or when the N+1th user views an article while the latest edit is still being parsed/saved).
However, I imagine that in the majority of cases articles are parsed during the POST request of the edit, and no matter how slowly they parse, that never shows up in slow-parse. We may want to add slow-parse instrumentation to that path as well.
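A rough sketch of what that could look like, assuming we time the parse ourselves and reuse the 'slow-parse' debug channel; the helper name, the call shape, and the 3-second threshold are placeholders, not the actual configuration:

```php
/**
 * Hypothetical helper: parse content on the save path and log slow
 * parses to the existing 'slow-parse' debug channel. The function
 * name and the 3-second threshold are assumptions for illustration.
 */
function parseWithSlowLog( Content $content, Title $title, ParserOptions $opts ): ParserOutput {
	$start = microtime( true );
	$output = $content->getParserOutput( $title, null, $opts );
	$elapsed = microtime( true ) - $start;

	if ( $elapsed > 3 ) {
		// Same channel the GET path uses, so both end up in one log.
		wfDebugLog( 'slow-parse', sprintf(
			"%-5.2f %s (save path)",
			$elapsed,
			$title->getPrefixedDBkey()
		) );
	}
	return $output;
}
```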
The tracking category itself could be added via addTrackingCategory(), plus memcached tricks to avoid mass addition and a check of whether the page is already in the category to avoid mass removal.
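A sketch of the throttling half, assuming it runs somewhere with a Parser in hand; the message key, the 10-second threshold, the 60-second window, and the cap of 50 additions are all invented, and the mass-removal check against existing category membership is omitted:

```php
/**
 * Hypothetical throttled tracking-category helper. The message key
 * ('slow-parse-category'), the 10-second threshold, the 60-second
 * window, and the cap of 50 additions are all invented.
 */
function maybeAddSlowPageCategory( Parser $parser, float $parseSeconds ): void {
	if ( $parseSeconds <= 10 ) {
		return;
	}

	$cache = ObjectCache::getLocalClusterInstance(); // memcached in WMF production
	$key = $cache->makeKey( 'slow-parse-category', 'recent-additions' );

	// Global throttle: if everything suddenly parses slowly (partial
	// outage), stop adding entries instead of flooding the category
	// with pages that are not normally slow.
	$count = (int)$cache->get( $key );
	if ( $count >= 50 ) {
		return;
	}
	$cache->set( $key, $count + 1, 60 ); // racy, but good enough for a throttle

	// The message key names the hidden category, in the same way
	// e.g. 'broken-file-category' does for broken file links.
	$parser->addTrackingCategory( 'slow-parse-category' );
}
```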
I'm not sure what the end goal is or if there is enough of a use case to justify this.
One related thing that might be more useful would be tracking tiers of render slowness (5-10 sec, 10-20 sec, 20+ sec) in redis or something, and warning (or blocking?) when a page would go into the next tier on edit.
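For illustration, a sketch of that tier tracking with a generic BagOStuff standing in for redis; the key names, the 30-day TTL, and the return convention are made up:

```php
/**
 * Hypothetical tier bucketing; the boundaries match the tiers above.
 */
function renderTier( float $seconds ): int {
	if ( $seconds >= 20 ) {
		return 3; // 20+ sec
	} elseif ( $seconds >= 10 ) {
		return 2; // 10-20 sec
	} elseif ( $seconds >= 5 ) {
		return 1; // 5-10 sec
	}
	return 0; // not considered slow
}

/**
 * Record the render time after a parse; returns true when this edit
 * pushed the page into a slower tier than before, so the caller can
 * warn the editor (or, more aggressively, block the save).
 */
function crossedIntoSlowerTier( BagOStuff $cache, Title $title, float $seconds ): bool {
	$key = $cache->makeKey( 'render-tier', $title->getPrefixedDBkey() );
	$oldTier = (int)$cache->get( $key );
	$newTier = renderTier( $seconds );
	$cache->set( $key, $newTier, 30 * 86400 ); // 30-day TTL: arbitrary
	return $newTier > $oldTier;
}
```

Warning after the save is straightforward; actually blocking would presumably need a pre-save parse (e.g. via the EditFilterMergedContent hook), which is a much more invasive change.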