In T147710#2811352, @Neil_P._Quinn_WMF wrote:
A couple of months ago, I was thinking that it would be very useful to have a model that detects conflict-of-interest (COI) and promotional editing. This seems like an area that would benefit from AI support: it's relatively common; it's considerably harder to detect and address than simple vandalism; and there's a substantial population of COI editors who would probably follow white-hat editing guidelines if a volunteer introduced them. However, I don't have the expertise to determine whether this is actually a tractable problem for AI, so a wishlist would be a good place to register the idea.
One thing to consider, though, is that a COI model should probably be developed in private, because making it public would be a tremendous giveaway to black-hat COI editors. We've raised this concern before about vandalism models (we don't want to train better vandals), but very few vandals are likely to take the time to learn enough about the system to game it. That's not true of COI editors, who have a strong financial motivation to do so. A central wishlist could also help catalog high-level concerns like this.