
Structured tasks: Conduct user testing to compare two approaches to the Add links structured task
Closed, Resolved · Public



The Growth team is introducing the first of a new type of structured task (f.k.a. microcontributions) into our experimental features for newcomers. Our hypothesis is that these smaller, specific edits will lead to more people trying a first edit (activation) and staying to make more edits of the same type afterwards (retention), as well as progressing to other, less structured edits.

In exploring a workflow design that can both enable simpler, specific edits that users can complete, and provide an environment conducive to newcomers progressing beyond these edits, we identified and created two distinct approaches:

  • A. Teaching-centered approach - which provides guidance to complete the specific structured task within the existing editor framework;
  • B. Structure-specific, “Volume” approach - which provides a UI focusing users on completing the specific editing task (in this case, reviewing machine recommendations for text to become links), thereby providing users with a faster way to complete edits with less context.
Goals of this user testing

Primary objective
By comparing how users interact with the two different approaches at this early stage, we will better understand which approach, if either, gives users a clearer understanding of structured tasks and a greater ability to complete them successfully, and better sets them up for other kinds of editing afterward.
As a result, we hope to move forward more confidently to implement a solution that best serves our ultimate objective: ensuring newcomer retention through structured tasks, both as a worthwhile contribution in and of itself and as an introduction to progressively larger, more unstructured edits.
Secondary objective
Assess users’ overall understanding of the current newcomer experimental features, and identify opportunities for usability improvements.

Testing format

Due to time and personnel constraints, this will first be an unmoderated, remote, task-based test, with participants recruited and sessions recorded in English only.

All participants will be given the same set of tasks and questions, with half of the participants using the Design A prototype and the other half using Design B.


The intention will be to screen for respondents who are new to editing on Wikipedia.

Test protocol

Details of the research brief

Key findings and next steps
  • We are moving forward in the direction of Concept A
    • User tests did not show advantages to Concept B.
    • Concept A offered more exposure to the rest of the editing experience, and better context for users, who were sometimes confused that Concept B was showing an entire article; other minor mismatched expectations were also encountered more often in Concept B
    • Certain usability improvements in Concept B will be added in the next Concept A iteration, namely:
      • Suggested edits module showing a visible Edit button
      • Showing the recommended link words in the Suggested edits module task explanation
      • A more comprehensive summary of the user's actions in reviewing the links
  • “AI” was well understood as a concept and term by our English-speaking users.
    • Recommendation: Further explore how well this and other terms translate to other languages
    • Recommendation: In copy revisions, keep the term concise and provide links to enable users to learn more about the AI suggestion feature outside of the main UI.
  • Participants often dismissed onboarding information reflexively, without reading it (even in test settings)
    • Recommendation: Make onboarding copy more succinct (one or two ideas per screen), and also accessible at multiple points (not only as a first-time use event)
    • Recommendation: Provide more specific, "real world" examples within onboarding, potentially including a quick 'practice' link task
  • Whilst *pre-task* understanding of what the "Add links" task entailed was low, once participants got to the task itself, their understanding and ability to complete it were very high.
    • Recommendation: Change the task name from “Add suggested links” to “Add suggested links between articles” (closer to what it is in production now)
  • Participants generally focused on the card contents (Wikidata description, match between link target and text) to evaluate links. This was true for both A and B.
    • Recommendation: Amend the copy in the task card so that users are reminded to review the link in the context of the article. For example, change the text prompt from “Should X link to ______?” to “Should X link to the Wikipedia article for _______?” to help clarify what is being evaluated.
    • Recommendation: Replace the Wikidata description with the article lead extract instead (since in smaller wikis, articles may often be missing these descriptions)
  • The edit icon was familiar to most people. Even in Concept A, with two separate edit icons, users understood that the pencil in the card would alter the linked article, and that this was different from editing a typo.
    • Recommendation: Explore ways to incorporate "normal" editing as optional entry points in the task. For example, offer the option to correct the link destination when a link recommendation has been rejected by the user for being the “wrong link destination”.
    • Recommendation: Provide user education about the ability to not only review AI suggestions, but also edit in other ways during the task.
  • Being able to see their published edits was important for both Concept A and B users.
    • Recommendation: Provide a clear way for users to look at their edit, as shown in Concept B (a link from the "Thanks" message)
  • Seeing the rejection reasons seemed to help users think more critically about their decisions
    • Recommendation: Explore exposing this list, or providing easier access to it, for users
    • Recommendation: Refine the rejection reasons listed
  • Other minor usability suggestions from the study:
    • Add more affordance to reassure users that clicking on a link suggestion to check it will not take them out of the link-review workflow
    • An explicit "Skip" button in the task
    • An explicit “Publish” option or auto-advance so that users know how to submit the suggestions.

Detailed findings available in the presentation:

Event Timeline

RHo updated the task description.
RHo renamed this task from "Structured tasks: Write user testing brief to compare two approaches to the Add links structured task" to "Structured tasks: Conduct user testing to compare two approaches to the Add links structured task". Oct 14 2020, 7:28 PM
RHo updated the task description.