The Moderator Tools team plans to build an 'automoderator' - T336934: Enable communities to configure automated reversion of bad edits. This tool would enable communities to use machine learning models to automatically prevent or revert vandalism. It could hypothetically run before an edit is saved, or after.
We have assumed, based on previous discussions, that a pre-save check would add too much latency to be feasible, slowing edit saves unacceptably. But we'd like to explore this in more detail to confirm whether that assumption holds.
Our team's concrete ask is: how much would checking each edit against the Language-agnostic revert risk model and/or the Multilingual revert risk model add to edit save times?
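As a starting point for answering that question, a minimal timing harness like the one below could measure the wall-clock cost of a single model check. The endpoint URL and payload shape are assumptions based on the public Lift Wing inference API, and `fake_scorer` is a hypothetical stand-in so the harness can be run without network access:

```python
import time

# Assumption: the Lift Wing inference endpoint for the language-agnostic
# revert risk model. A real measurement would POST the payload here.
REVERTRISK_URL = (
    "https://api.wikimedia.org/service/lw/inference/v1/models/"
    "revertrisk-language-agnostic:predict"
)

def timed_check(score_fn, payload):
    """Run one model check and return (result, elapsed milliseconds)."""
    start = time.perf_counter()
    result = score_fn(payload)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return result, elapsed_ms

# Hypothetical stand-in scorer: simulates ~50 ms of model inference.
# A real deployment would issue an HTTP request to REVERTRISK_URL and
# parse the prediction out of the JSON response body.
def fake_scorer(payload):
    time.sleep(0.05)
    return {"prediction": False, "probability": 0.12}

result, ms = timed_check(fake_scorer, {"rev_id": 12345, "lang": "en"})
print(f"check took {ms:.1f} ms, revert-risk probability {result['probability']}")
```

Running this harness against the real endpoint, across a sample of recent revisions, would give a latency distribution to compare against current edit save times.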
Additionally, from the 2022 Community Wishlist Survey:
Problem: Abuse Filters are a great way of preventing problematic edits before they happen. However, identifying "problematic" edits is currently done using user segmentation, common phrases used by vandals, etc. We have a much better tool to determine whether an edit is destructive: ORES. If we were able to prevent all edits above a certain threshold, the workload on patrollers would be significantly reduced and it might prevent some communities from requesting an all-out IP-editing ban.
This would have to be blazing fast, and could not use any MediaWiki API or prediction pre-cache. But I think it could be doable, and since it's a concrete ask from the community, we should see what we can do.
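The threshold idea from the wishlist entry reduces to a simple gate: reject the edit pre-save only when the model's revert-risk score clears a community-configured cutoff. A minimal sketch, where `THRESHOLD` and `should_block_edit` are illustrative names rather than any existing MediaWiki interface:

```python
# Assumption: communities would tune this cutoff per-wiki, much like
# AbuseFilter conditions are tuned today.
THRESHOLD = 0.95

def should_block_edit(probability: float, threshold: float = THRESHOLD) -> bool:
    """Return True if the edit should be rejected before saving."""
    return probability >= threshold

print(should_block_edit(0.97))  # a high-risk edit is blocked
print(should_block_edit(0.40))  # a low-risk edit passes through
```

A conservative (high) threshold keeps false positives rare, which matters because a pre-save block is far more disruptive to a good-faith editor than a post-save revert.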