Please respond to the following questions, and provide as much detail as possible for each.
- Problem: What problem are you facing that could be resolved or mitigated with infrastructural improvements?
To make the Recent Activity module in the Moderator Dashboard, which displays edits filtered by their revert risk score, widely available (T408388), we'd like the Machine Learning team to assist with deploying the revert risk model to the wikis in this list (P84306) that don't have goodfaith/damaging models, as was done a quarter ago (T348298).
- [Optional] Possible solutions: What infrastructural improvement(s) would most meaningfully help you with this problem? Feel free to suggest multiple ideas.
As discussed on Slack, the Machine Learning team could help by running an analysis to derive revert-risk thresholds for these wikis, similar to what was done for idwiki, and adding them to the MediaWiki configuration. After that, Moderator Tools will be able to turn on the model for those wikis and run a script to backfill scores for existing edits.
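For illustration, a threshold analysis of this kind typically picks the lowest model score at which flagged edits meet a target precision on a labeled sample. The sketch below is hypothetical (function name, data, and target precision are all illustrative, not the ML team's actual methodology):

```python
# Hypothetical sketch: choose the lowest revert-risk score threshold that
# reaches a target precision on a labeled sample of edits.

def pick_threshold(scored_edits, target_precision=0.8):
    """scored_edits: list of (model_score, was_reverted) pairs.

    Returns the smallest candidate threshold whose flagged edits
    (score >= threshold) reach the target precision, or None.
    """
    candidates = sorted({score for score, _ in scored_edits})
    for threshold in candidates:
        flagged = [(s, r) for s, r in scored_edits if s >= threshold]
        if not flagged:
            break
        precision = sum(r for _, r in flagged) / len(flagged)
        if precision >= target_precision:
            return threshold
    return None

# Illustrative labeled sample: (revert-risk score, edit was reverted)
sample = [
    (0.10, False), (0.25, False), (0.40, False), (0.55, True),
    (0.60, False), (0.70, True), (0.85, True), (0.95, True),
]
print(pick_threshold(sample, target_precision=0.75))  # -> 0.55
```

The chosen threshold per wiki would then be recorded in MediaWiki configuration so the Recent Activity module can filter edits consistently.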
- Enabled projects: Which specific user-facing features or experiments would be unblocked or meaningfully enabled (in terms of development ease, velocity, etc.) by solving this problem? Which teams are launching these features or experiments?
The Recent Activity module on the Moderator Dashboard (PersonalDashboard).
- Urgency and importance: When are these features or experiments expected to launch? How essential is this infrastructure for unblocking development?
The Moderator Dashboard is expected to launch by the end of November, so getting the thresholds before then would be ideal for testing the hypothesis.
- [Optional] Notes: Is there anything else you'd like to share?
The main work around the rollout is being tracked in T408388.