There are two main questions we want to answer using quantitative metrics. The information necessary to answer both questions is logged in the [EditAttemptStep data stream](https://meta.wikimedia.org/wiki/Schema:EditAttemptStep), although we should first start oversampling all mobile visual editor events so we have plenty of data available.
We could look at these questions in an A/B test, but for a feature of this size, it doesn't seem worth the effort. Instead, we will simply roll out the feature and compare the data from before and after.
**Do the load screen improvements...**
1. **...change how many users stick with their edit attempt long enough for the interface to fully load?**
* This is our main metric: we hope to increase the proportion of users who make it through the loading process (technically, the //ready rate//), although the current rate is already 95%, so there's not much room for improvement. Even if we don't see an improvement, that doesn't invalidate the case for the project, since another reason for doing this is making users feel more confident using the editor, which we can't easily test quantitatively.
2. **...change the overall load times?**
* This is a guardrail metric: we want to make sure that adding this complexity to the loading process doesn't end up increasing the overall load time (technically, the //ready time//).
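To make the two metrics concrete, here is a minimal sketch of how the ready rate and ready time could be computed from EditAttemptStep-style events. The field names (`editing_session_id`, `action`, `timing`) and the toy event list are assumptions for illustration, not the actual schema or query we will run.

```python
from statistics import median

# Hypothetical sample of EditAttemptStep-style events (assumed shape).
events = [
    {"editing_session_id": "a", "action": "init"},
    {"editing_session_id": "a", "action": "ready", "timing": 1200},
    {"editing_session_id": "b", "action": "init"},   # abandoned before "ready"
    {"editing_session_id": "c", "action": "init"},
    {"editing_session_id": "c", "action": "ready", "timing": 800},
]

def ready_rate(events):
    """Share of edit sessions that reach "ready" after an "init"."""
    inits = {e["editing_session_id"] for e in events if e["action"] == "init"}
    readies = {e["editing_session_id"] for e in events if e["action"] == "ready"}
    return len(inits & readies) / len(inits)

def median_ready_time(events):
    """Median load time (ms) among sessions that reached "ready"."""
    return median(e["timing"] for e in events if e["action"] == "ready")

print(ready_rate(events))        # → 0.666…: 2 of 3 sessions reached "ready"
print(median_ready_time(events)) # → 1000
```

In practice this would run as a query against the event database rather than in Python, and we'd compare the two numbers across the pre- and post-deploy windows.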
The deploy is tentatively planned for the **week of 11 March**.