As mentioned by @ori during our last meeting, we should come up with the main metrics we aim to move the needle on as a team, and decide what form they should take as a dashboard.
Some ideas we might want to explore further. This is my take on it and is open to further debate:
- Maybe all our core performance metrics should be expressed in terms of end-user experience, e.g. "as a reader, how long does it take until I can view an article". This maps almost 1-to-1 to firstPaint. If we go down this road, we should always start the thinking from the UX, not from the technical data: the vast amount of data we have can trick us into tracking the wrong thing. Another UX metric idea that came up was "as an editor, how long does it take for my contribution to be visible to everyone". We should probably brainstorm more of those in this task.
- The distribution of our data can be very imbalanced. Should we stop looking at overall metrics by default and, for example, use HTTPS US traffic as the only source for UX-based metrics like the ones mentioned above? The main vectors of distribution imbalance, like HTTP vs HTTPS and geographical network differences, would then be tracked on their own. The rationale is that things like growth in specific geos or HTTPS adoption could completely throw off our UX-based metrics and make it hard, if not impossible, to see on a dashboard whether our efforts have an impact. Protocol and geography might not be the only factors to isolate out of our UX-based metrics. Going back to the UX mindset, HTTP vs HTTPS could be a dedicated metric like "how much slower is browsing the website in a secure manner?", and the geography factor could be "wikis should be as fast as they can be regardless of where I live".
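To make the mix-shift concern above concrete, here is a toy sketch (all numbers and segment labels are invented for illustration): even if latency within every segment stays perfectly flat, a shift in the traffic mix toward a slower segment moves the overall median on its own, which is exactly what would mislead us on a combined dashboard.

```python
from statistics import median

# Hypothetical firstPaint samples in ms -- purely illustrative numbers.
fast = [800]   # e.g. the HTTPS US segment
slow = [2500]  # e.g. traffic from a slower geography

week1 = fast * 8 + slow * 2  # 80/20 traffic mix
week2 = fast * 5 + slow * 5  # same per-segment latencies, 50/50 mix

# Per-segment medians are identical in both weeks...
print(median(fast), median(slow))  # 800 2500

# ...but the overall median regresses purely from the mix change.
print(median(week1))  # 800
print(median(week2))  # 1650.0
```

Nothing got slower for any individual user here, yet the overall number nearly doubled; segmenting by protocol and geography first would keep that artifact out of the UX-based metrics.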