Bold-Revert-Discuss is an essay about good wiki practices. Not all reverts need discussion, but many do. Which reverts tend to get discussion?
Perhaps we could find indicators that a curation action (like a revert for damage) is going to cause substantial disagreement.
We might train a machine-learning model to score reverts for likely contention, flag the high-scoring ones, and prioritize them for human (or maybe human-who-likes-to-talk-to-other-humans) review.
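As a rough sketch of what such a model could look like, here is a tiny logistic-regression scorer over a few made-up revert features (whether the reverted editor is a newcomer, whether the revert came quickly, whether the article is heavily watchlisted). All feature names, data, and the discussion labels are hypothetical; a real system would train on logged revert outcomes.

```python
import math

# Hypothetical features per revert: (reverted editor is a newcomer?,
# revert happened within minutes?, article is heavily watchlisted?)
# Label: did a talk-page discussion follow? All values are invented.
DATA = [
    ((1.0, 1.0, 1.0), 1),
    ((1.0, 0.0, 1.0), 1),
    ((0.0, 1.0, 1.0), 1),
    ((0.0, 0.0, 0.0), 0),
    ((1.0, 0.0, 0.0), 0),
    ((0.0, 1.0, 0.0), 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=2000, lr=0.5):
    """Fit a tiny logistic-regression model by stochastic gradient descent."""
    w = [0.0, 0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def contention_score(model, x):
    """Estimated probability that this revert will spark discussion."""
    w, b = model
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

model = train(DATA)
# Flag reverts whose score crosses a review threshold for human attention.
queue = [x for x, _ in DATA if contention_score(model, x) > 0.5]
```

The point is not the model class but the triage loop: reverts that score high on predicted contention get routed to a human reviewer before the disagreement escalates.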