Explore design solutions for finding users to help reviewing
Closed, Resolved · Public

Description

This ticket captures design ideas to help reviewers find good-faith editors and to encourage them to help those newcomers in a constructive way. This is the goal of the Edit Review Improvements project.

An overview of the areas considered is captured in the diagram below; sub-tickets will be created to capture the different ideas in more detail.

[Diagram: Mentor - Overview.png, overview of the areas to explore]

Areas to explore

  • T137814: Review feeds. A central place that provides an overview of the user review activity and encourages constructive follow-up actions.
  • T138935: Contribution filtering. Filters for contributions (on recent changes or dedicated pages) need to be improved to target the good-faith criteria and provide the flexibility needed to define custom review feeds.
    • T142785 considers extending the current Recent Changes page to better support finding good-faith newcomers.
  • T138808: Invite to become a reviewer. Experienced editors can help newcomers; we want to let them know they are invited to do so.
  • T138815: Editors asking for help. Allowing newcomers to ask for help as soon as possible can help surface their good intentions and make the interaction with reviewers more fluid.
  • T139064: Entry points and integrations. Identify existing areas related to the reviewing activity and connect the proposed solutions with them.
  • T138939: Reviewing contributions. Allow users to view the changes of a feed, evaluate them, and perform follow-up actions. Encouraging constructive behaviour while keeping the process efficient is the main balance to achieve. This is not the main area of exploration, but it is interesting to check how the previous concepts can support the reviewing activity.

Prototype

To see how these ideas fit together, you can watch this video walkthrough.

The video is based on a prototype you can interact with, but note that it only supports the interactions needed to illustrate a few scenarios.

Plans to research these ideas are captured at T140161: Page Curation user workflows and Edit Review prototype concept validation.

Event Timeline

Pginer-WMF updated the task description.
Pginer-WMF added a project: Design.
Pginer-WMF added subscribers: jmatazzoni, Catrope.

Recent changes filtering. Filters on recent changes need to be improved [..]

This seems like one of the most straightforward ideas. The Special:RecentChanges page has a 'Legend' with flags for changes that are, e.g., new, potentially damaging, or awaiting review (by the way, the text still refers to the patrol action, not review: 'This edit has not yet been patrolled'). If filters for those flags are added, the monitoring process becomes more efficient.

Invite to become a reviewer.

This might be done along with the existing notifications, e.g. 'Congratulations on your 100th edit! You're invited to become a mentor editor.'

Editors asking for help. Allowing newcomers the possibility of asking for help as soon as possible

New editors (or editors with fewer than 20 successful edits) could see an added icon/button/template when they start editing that reminds them to ask for other editors' help.

In order to facilitate discussion and research on these ideas, I created a prototype and a video showing several of the initial design ideas in context.

Hey @Pginer-WMF! The video is really great for reviewing your designs in context. Really cool to see this. I have a few thoughts/questions.

I'm looking at the [Good-faith ooo] widget and I'm not sure how it will express the lack of good faith (bad faith). I've just talked to @jmatazzoni about this, and it seems that the scoring system is a little bit counter-intuitive. When the "goodfaith" model predicts 50%, it's at its most unsure. When the model predicts 0%, it's very confident that the edit is intentionally damaging (vandalism), and when the model predicts 100%, it's very confident that the edit is not *intentionally* damaging -- and may not be damaging at all!
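
To make that scale concrete, here's a minimal sketch in Python of how a UI might read a "goodfaith" probability. The function name, thresholds, and wording are just illustration, not anything the model itself provides:

```
def describe_goodfaith(score):
    """Interpret an ORES 'goodfaith' probability in [0.0, 1.0].

    1.0 -> very confident the edit is NOT intentionally damaging
           (it may still be damaging, just not vandalism);
    0.5 -> the model is at its most unsure;
    0.0 -> very confident the edit is intentional damage (vandalism).
    """
    if score >= 0.5:
        return "likely good faith (%.0f%% confident)" % (score * 100)
    return "likely bad faith (%.0f%% confident)" % ((1 - score) * 100)
```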

Re. sequential reviews of multiple edits in a feed. It seems like you go back and forth between the diff view and the feed. I see some buttons in the upper right for moving between changes. It seems to me that patrollers will be most interested in the flow where they review 100+ edits/users/pages/etc. in quick succession. Could you describe/demo how you imagine that working? I imagine that patrollers and socializers alike will be very interested in the efficiency of this flow for their work.

One more thought. I think the 10-day (or other short period) monitoring is a great idea. I was worried about long-term watchlisting of users' activities, but short-term watchlisting could both mitigate the potential stalking/harassment issues and, in the end, be more useful, because I don't need to spend time removing users from my "mentor" list.

Thanks @Halfak, for the feedback. As usual, interesting and relevant comments.

I'm looking at the [Good-faith ooo] widget and I'm not sure how it will express the lack of good-faith (bad faith).

The different criteria are expressed as filters in the designs. Filters are meant to be flexible mechanisms in this context:

It is ok to provide different filters for opposing concepts, or filters that overlap. For example, we can provide a "Good-faith" filter that shows the edits with an ORES score from 60% to 100%. The filled dots will indicate how confident we are that those are good faith (e.g., one dot: 60-70%, two dots: 70-80%, three dots: 80-100%).

The "Good-faith" filter won't match anything that we are not minimally confident is in good faith. For each confidence range there could also be a specific filter (e.g., "Good-faith (high confidence)").

We can also provide a filter for the opposite concept ("Bad faith") that covers contributions in the 0-40% score range. In this case, the dots will indicate how confident we are that an edit is bad faith (one dot: 30-40%, two dots: 20-30%, three dots: 0-20%).
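
As a rough sketch of the dot mapping described above (the cut-offs are the illustrative ones from the examples, not a final spec, and the names are hypothetical):

```
# Illustrative cut-offs from the examples above, not a final spec.
# (low, high, dots); on a shared boundary the first match wins.
GOOD_FAITH_DOTS = [(0.60, 0.70, 1), (0.70, 0.80, 2), (0.80, 1.00, 3)]
BAD_FAITH_DOTS = [(0.30, 0.40, 1), (0.20, 0.30, 2), (0.00, 0.20, 3)]

def dots_for(score, ranges):
    """Return the number of filled confidence dots for an ORES score,
    or 0 when the score falls outside the filter's range."""
    for low, high, dots in ranges:
        if low <= score <= high:
            return dots
    return 0

# dots_for(0.95, GOOD_FAITH_DOTS) -> 3; dots_for(0.5, GOOD_FAITH_DOTS) -> 0
```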

Any filter can be inverted, but it may still make sense to provide symmetrical filters. For example, "patrolled" and "pending patrol" are exact opposites, but having both may be more convenient than providing one and relying on the user to invert it.

Filters can be used to either (a) show only the items that match them or (b) highlight the items that match without hiding the rest. This system provides considerable flexibility in what to target: for example, view only the edits that are considered good faith, and highlight those we are highly confident about.
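
A minimal sketch of how these two modes could combine on a feed; all the names and thresholds here are illustrative, not part of the design:

```
from dataclasses import dataclass

@dataclass
class Edit:
    title: str
    goodfaith: float  # ORES 'goodfaith' probability, 0.0-1.0

def apply_goodfaith_filter(edits, show_from=0.60, highlight_from=0.80):
    """Mode (a): hide edits below the filter's range.
    Mode (b): flag, without hiding, the high-confidence matches."""
    shown = [e for e in edits if e.goodfaith >= show_from]
    return [(e, e.goodfaith >= highlight_from) for e in shown]

# e.g. [Edit("Foo", 0.95), Edit("Bar", 0.55), Edit("Baz", 0.72)]
# -> Foo (highlighted) and Baz are shown; Bar is hidden.
```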

Re. sequential reviews of multiple edits in a feed. It seems like you go back and forth between the diff view and the feed.

The feeds are presented in three views at different levels: the list of feeds, the list of contributions of a feed, and the review tool to view one specific contribution in detail.

I agree that it is not efficient for reviewers to move back and forth across these views. I expect the "list of contributions" to be a place to calibrate the filters or look for something specific, but review sessions to happen in the review tool. For that purpose, the tool includes navigation controls that allow users to move to the next/previous item of the current feed.

The review tool is not in scope for the initial steps of the project, so there is still a lot to figure out, but optimising for this repeated use is something we have in mind.

One more thought. I think the 10-day (or other short period) monitoring is a great idea. I was worried about long-term watchlisting of users' activities, but short-term watchlisting could both mitigate the potential stalking/harassment issues and, in the end, be more useful, because I don't need to spend time removing users from my "mentor" list.

This is great to hear.

There were some pieces of feedback that came up repeatedly during research and discussions: the value of human-to-human interaction, the desire for low commitment (reviewers mentioned that once it becomes more like work and less like fun, they get demotivated), and the risks of anything that involves some degree of keeping an eye on other users.

I think the lightweight model may be a promising direction, but this is a really complex issue.
I'm especially interested in exposing this model to users during our research to anticipate concerns and ways in which it can be abused, so that we can work on preventing them (e.g., providing a feedback channel for those being reviewed, a way to exit the mentoring for those uncomfortable with being "tracked", limiting it to users who explicitly requested help, etc.).

jmatazzoni claimed this task.

Resolving this. Let me know if anyone wants to keep it open.