Inspired by //[GOTO 2016 • What I Wish I Had Known Before Scaling Uber to 1000 Services (Matt Ranney)](https://www.youtube.com/watch?v=kb-m2fasdDY&utm_source=webopsweekly&utm_medium=email)//.
-----
Around 21:00 he mentions that while tooling differs between programming languages and frameworks, Uber found it quite useful to get a uniform view of performance by converting the output of those different tools to flame graphs.
We currently do this only for the MediaWiki run-time (with Xenon for HHVM).
It'd be interesting to get similar statistics going for other run-times and services.
A few suggestions:
* Varnish front-end and back-end. (Wikimedia VCL; operations/puppet)
* RESTBase (Node.js)
* Parsoid (Node.js)
* EventLogging (Python)
* Statsv (Python; analytics/statsv)
* RCStream (Python; mediawiki/services/rcstream)
* MediaWiki front-end JavaScript (maybe capture via headless Chrome as part of asset-check.py in operations/puppet)
It'd be great to be able to easily dig into any of these services (both for the teams that maintain them, as well as for e.g. #Performance-Team).
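Regardless of run-time, the common denominator is usually the "folded stacks" text format that Brendan Gregg's flamegraph.pl consumes: one line per unique stack, frames separated by semicolons, followed by a sample count. A minimal sketch of that aggregation step (the sample data and function name are invented for illustration):

```python
from collections import Counter

def fold_stacks(samples):
    """Aggregate stack samples (each a list of frames, outermost first)
    into folded-format lines suitable for flamegraph.pl."""
    counts = Counter(";".join(stack) for stack in samples)
    return [f"{stack} {count}" for stack, count in sorted(counts.items())]

# Hypothetical samples, as a sampling profiler might emit them:
samples = [
    ["main", "parse", "tokenize"],
    ["main", "parse", "tokenize"],
    ["main", "render"],
]
for line in fold_stacks(samples):
    print(line)
# main;parse;tokenize 2
# main;render 1
```

Each per-language profiler (Xenon for HHVM, a sampling profiler for Node.js or Python, etc.) would only need a small adapter emitting this format; the flame graph rendering step is then identical everywhere.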