When we set out to ask Wikipedia visitors their opinion of page load performance, our main hope was to answer an age-old question: which RUM metric matters the most to users? And, more interestingly, which ones matter the most to our users, browsing our content.
Now that we have a lot of user input, with our micro survey running for over a year, we can look at which classic RUM metrics correlate best with users' perception of page load performance.
Methodology
We collect user responses from an in-page micro survey asking whether the page load was fast enough. We map responses to 1 for positive answers, -1 for negative answers, and we discard neutral "I don't know" answers. We only look at records where a given RUM metric is present and, for time-based metrics, only if the value is lower than 30 seconds. Beyond that point we know for certain that the experience was terrible or that there was an issue with metric collection. A sketch of that mapping and filtering step follows below.
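To make the filtering concrete, here is a minimal sketch of the mapping and filtering described above, assuming a pandas DataFrame with hypothetical column names ("survey_response" plus one column per RUM metric, in milliseconds); our actual collection pipeline and schema differ.

```python
# Minimal sketch of the response mapping and filtering described above.
# Column names ("survey_response", "domInteractive", ...) are hypothetical.
import pandas as pd


def prepare_responses(df: pd.DataFrame, metric: str, time_based: bool = True) -> pd.DataFrame:
    # Map survey answers to a numeric score; neutral "I don't know" answers are dropped.
    score_map = {"yes": 1, "no": -1}
    df = df.assign(score=df["survey_response"].map(score_map)).dropna(subset=["score"])

    # Keep only records where the RUM metric was actually reported.
    df = df.dropna(subset=[metric])

    # For time-based metrics, discard values of 30 seconds or more,
    # which indicate either a terrible experience or a collection issue.
    if time_based:
        df = df[df[metric] < 30_000]

    return df
```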
Results
| Metric | Pearson coefficient | Sample size |
|---|---|---|
| domInteractive | -0.149 | 1,709,435 |
| firstContentfulPaint | -0.144 | 985,662 |
| firstPaint | -0.143 | 1,057,157 |
| domComplete | -0.142 | 1,703,476 |
| loadEventEnd | -0.142 | 1,703,441 |
| loadEventStart | -0.142 | 1,703,388 |
| top thumbnail (Element Timing for Images origin trial) | -0.138 | 28,070 |
| responseStart | -0.131 | 1,705,859 |
| RUMSpeedIndex | -0.129 | 1,319,177 |
| secureConnectionStart | -0.128 | 942,602 |
| requestStart | -0.120 | 1,694,865 |
| connectEnd | -0.119 | 1,596,297 |
| redirecting | -0.109 | 33,056 |
| domainLookupEnd - domainLookupStart | -0.0965 | 670,932 |
| connectStart | -0.096 | 1,595,544 |
| netinfoEffectiveConnectionType | 0.0845 | 1,301,435 |
| deviceMemory | 0.0663 | 1,286,609 |
| fetchStart | -0.0521 | 1,444,012 |
| unloadEventEnd - unloadEventStart | -0.03089 | 29,854 |
| cpu benchmark score | -0.00615 | 1,696,239 |
| transferSize | -0.00208 | 1,358,990 |
Pearson correlation coefficients range from -1 to 1, meaning that even our "best" correlations are weak; they are merely the least terrible ones. Overall, RUM metric correlation is quite poor, an indication that these metrics only represent a small part of what constitutes the perceived performance of a page load.
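For illustration, here is a hedged sketch of how each row of the table above could be computed, reusing the hypothetical prepare_responses() helper from the methodology sketch. scipy.stats.pearsonr returns the coefficient along with a p-value, and the sample size is simply the number of surviving records.

```python
# Sketch of how a row of the results table could be produced,
# assuming the hypothetical prepare_responses() helper defined earlier.
from scipy.stats import pearsonr


def correlate_metric(df, metric: str, time_based: bool = True):
    subset = prepare_responses(df, metric, time_based)
    coefficient, _p_value = pearsonr(subset[metric], subset["score"])
    return coefficient, len(subset)

# e.g. correlate_metric(df, "domInteractive")
# would yield roughly (-0.149, 1709435) on our data, per the table above.
```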
Analysis
There is a clear pattern of environmental properties having the worst correlation: effective connection type, device memory, available CPU and page transfer size. This might suggest that users are aware of their device, their network quality and the page size (small vs. big article, in Wikipedia's case) and adjust their expectations to those factors.
As for actual RUM metrics, it's interesting to see that the top ones are not just the paint metrics, but also domInteractive. The reason they correlate so similarly is probably that, in Wikipedia's case, these metrics fire very close to each other in time, due to the absence of 3rd-party assets on our pages.
Conclusion
Thanks to this real-world opinion data, we can make a better educated guess about which RUM metric(s) matter the most to us. It also shows how sub-par existing RUM metrics are in general. We encourage the development of new metrics that capture aspects of performance other than the initial page load/render, as that part seems well covered already, with seemingly very little difference between existing metrics in terms of correlation to perceived performance, at least in our case.
The performance perception micro survey will keep running and will allow us to benchmark future APIs, which we intend to do with our ongoing Layout Instability API origin trial, for example, once fixes for the bugs we discovered during the trial have been rolled out.