T314384 will introduce new machine learning models that make patrollers aware of suspicious edits so they can decide whether those edits ought to be reverted.
This task covers reusing the "revision risk" score that T314384 will assign to every edit, across all Wikipedias, so that Product-Analytics can use this data to assess edit quality in the feature analyses they do.
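For illustration only, here is a minimal sketch of how an analysis script might retrieve a per-revision risk score once it is available. It assumes the score is exposed through a Lift Wing-style inference endpoint; the model name, URL, payload, and response shape below are assumptions for the sketch, not decisions made in this task.

```python
# Minimal sketch: fetch a "revision risk" score for a single revision.
# Assumptions (not confirmed by this task): the model is served via a
# Lift Wing-style inference endpoint, and the model name, URL, and
# response shape shown here are placeholders.
import requests

LIFTWING_URL = (
    "https://api.wikimedia.org/service/lw/inference/v1/models/"
    "revertrisk-language-agnostic:predict"  # assumed model name
)

def get_revision_risk(rev_id: int, lang: str) -> float:
    """Return the risk score for one revision (hypothetical response shape)."""
    resp = requests.post(
        LIFTWING_URL,
        json={"rev_id": rev_id, "lang": lang},
        timeout=30,
    )
    resp.raise_for_status()
    payload = resp.json()
    # Assumed response shape: {"output": {"probabilities": {"true": 0.87, ...}}}
    return payload["output"]["probabilities"]["true"]

if __name__ == "__main__":
    # Example call; the rev_id here is illustrative, not a real revision.
    print(get_revision_risk(rev_id=123456789, lang="en"))
```

In practice, Product-Analytics would more likely consume these scores in bulk (e.g., joined to revision data in the Data Lake) rather than one revision at a time; the sketch above only shows the shape of the per-revision lookup.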
Story
- As the Editing Team's product manager, I need to be able to assess the quality of edits people are making with the contribution tools/experiences we are responsible for, so that we can answer questions like: "Should this contribution tool/experience be made more/less widely available?", "What – if any – changes might we need to make to this tool/experience in order to improve the quality of changes people use it to publish?", and "How does the quality of changes people are using a given tool/experience to publish compare to the quality of changes people are publishing with the previous/legacy tool/experience?"
- As a Data Scientist...
Requirements
@MNeisler to fill in
@MNeisler raised this idea in the 13 September meeting (private doc) with @diego, @nayoub, and @ppelberg.