**[[ https://www.mediawiki.org/wiki/Moderator_Tools/Automoderator | Automoderator ]]** is a tool under development by the Moderator Tools team. It uses the revert risk models developed by the Wikimedia Foundation Research team to score edits, reverting any edit that scores above a community-configurable threshold. There are two versions of this model:
* A [[ https://meta.wikimedia.org/wiki/Machine_learning_models/Proposed/Multilingual_revert_risk | multilingual model ]], with support for 47 languages.
* A [[ https://meta.wikimedia.org/wiki/Machine_learning_models/Proposed/Language-agnostic_revert_risk | language-agnostic ]] model.
As of September 2024 we are using the language-agnostic model, but we are investigating a shift to the multilingual model on supported wikis. For the global support for Automoderator detailed below, we would only use the language-agnostic model. These models only support the main namespace of Wikipedia projects.
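To illustrate the scoring step described above, here is a minimal Python sketch that queries the language-agnostic revert-risk model via the Lift Wing inference API and compares the score against a configurable threshold. The endpoint path and response fields follow Lift Wing's published model API but should be verified against current documentation; the threshold value is purely illustrative, since the real threshold is set per community.

```python
import json
import urllib.request

# Lift Wing endpoint for the language-agnostic revert-risk model
# (verify against current Lift Wing documentation before relying on it).
LIFTWING_URL = (
    "https://api.wikimedia.org/service/lw/inference/v1/models/"
    "revertrisk-language-agnostic:predict"
)


def score_edit(rev_id: int, lang: str) -> float:
    """Return the model's probability that the given revision will be reverted."""
    payload = json.dumps({"rev_id": rev_id, "lang": lang}).encode()
    req = urllib.request.Request(
        LIFTWING_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The response nests the revert probability under output.probabilities.true.
    return body["output"]["probabilities"]["true"]


def should_revert(score: float, threshold: float) -> bool:
    """Revert only when the score exceeds the community-configured threshold."""
    return score > threshold
```

A score of, say, 0.99 against a threshold of 0.97 would trigger a revert, while 0.90 would not; in practice Automoderator also applies exemptions (e.g. for trusted user groups) before acting.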
Automoderator can be configured at each wiki on which it is deployed via #mediawiki-extensions-communityconfiguration. However, we have a long tail of Wikipedias with very few active administrators or patrollers, who may not have the capacity or knowledge to configure Automoderator for their wiki. On such wikis a substantial volume of the anti-vandalism workload is carried out by global editors, who monitor edits across dozens or hundreds of wikis. In particular, Global Sysops and Stewards carry out administrative workflows. These editors would like Automoderator to run on small wikis, with control in the hands of trusted global contributors.
We could imagine giving [[ https://meta.wikimedia.org/wiki/Stewards | Stewards ]] a global configuration controlling how Automoderator runs across these wikis. This would probably entail a single configuration that applies to all wikis, with per-wiki on/off switches in case of individual issues with its running. Local configuration should take priority if it exists, in case a wiki later becomes interested in configuring Automoderator itself.
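The precedence rules above could be resolved roughly as in the following sketch. All field names, configuration shapes, and wiki domains here are hypothetical; the actual schema would be defined by CommunityConfiguration.

```python
# Hypothetical global configuration: one set of defaults for all wikis,
# plus a per-wiki off switch for individual issues.
GLOBAL_CONFIG = {
    "enabled": True,
    "threshold": 0.97,
    "disabled_wikis": {"xx.wikipedia.org"},  # per-wiki on/off switch
}

# Wikis that have adopted their own local configuration.
LOCAL_CONFIGS = {
    "yy.wikipedia.org": {"enabled": True, "threshold": 0.99},
}


def effective_config(wiki: str):
    """Resolve Automoderator's configuration for a wiki.

    Local configuration takes priority if it exists; otherwise fall back
    to the global configuration, unless the wiki is switched off there.
    Returns None when Automoderator should not run at all.
    """
    if wiki in LOCAL_CONFIGS:
        return LOCAL_CONFIGS[wiki]
    if GLOBAL_CONFIG["enabled"] and wiki not in GLOBAL_CONFIG["disabled_wikis"]:
        return {"enabled": True, "threshold": GLOBAL_CONFIG["threshold"]}
    return None
```

Under this sketch, a wiki with local configuration keeps full control, a wiki on the global disabled list is skipped entirely, and every other wiki inherits the Steward-managed defaults.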
We would want a centralised monitoring dashboard which presented high-level data about Automoderator's behaviour on those wikis, such as rate of reverts.
**Questions**
Exclusively technical questions are in T372413.
* Q: Should this be Steward-only functionality, or also open to Global sysops?
* A: Stewards and Meta admins control this kind of functionality for other tools (e.g. AbuseFilter), so it makes sense that they would also be the groups to configure Automoderator.
* Q: What would Automoderator be called on these wikis? It probably isn't feasible to name it uniquely for each wiki.
* A: We can make a firm decision about this later, but it can be named the same thing everywhere, such as 'Automoderator'.
* Q: How do we enable a false-positive reporting flow for these wikis?
* Q: What data about Automoderator's behaviour would be needed in a centralised dashboard?
* Q: How would we handle a community migrating from global control of Automoderator to local control?
* A: Local consensus, communicated to global editors, should be sufficient. Stewards won't object if there is local consensus. There's an additional technical question about how to facilitate this.
* Q: How can local communities view Automoderator's configuration and speak to Stewards about it?
* A: Perhaps we could have a Meta discussion page, and link to it and to the configuration from Automoderator's global user page.