
Create a system to encode best practices into editing experiences
Open, Needs Triage, Public

Assigned To
None
Authored By
ppelberg
Oct 9 2020, 6:01 PM

Description

🌱 In the forest that is Phabricator, this ticket is very much a seedling on the forest floor. Read: this task is a gathering place/work in progress.


This parent task is intended to help gather and advance the thinking around how the visual editor might be enhanced to help people learn the social conventions (read: policies and guidelines) and exercise the judgement necessary to become productive and constructive Wikipedia editors.

Background

The visual editor's growing popularity among people new to editing Wikipedia [i] suggests it has been reasonably successful at helping people learn the technical skills [ii] necessary to edit Wikipedia.

The trouble is, the edits these newcomers make often break/defy Wikipedia policies and guidelines.

This task is about exploring how the visual editor could be augmented/enhanced to help people learn these policies and guidelines and exercise the judgement necessary to become productive and constructive Wikipedia editors.

Potential impact

New and Junior Contributors
Working on talk pages has led us to notice that for many newcomers, the earliest human interactions they have on-wiki center around something they did wrong, like not contributing in the Wikipedia Way [iii][iv][v][vi][viii].

We wonder how newcomers perceive these interactions and, further, whether having a positive first interaction with another Wikipedia editor could increase the likelihood that they continue editing. [vii]

Senior Contributors
We wonder whether adding more "productive friction" to publishing flows could free Senior Contributors up to do more high-value work by:

  • Reducing the amount of work they need to do reverting low-quality edits, by increasing the quality of edits before they are published
  • Reducing the amount of work they need to do blocking bad faith/vandalistic editors, by lowering the likelihood these actors are able to complete/publish the changes that cause them to be blocked
  • Reducing the amount of work and time they spend writing on new users' talk pages to alert them to policies and/or guidelines they've [likely] unknowingly broken
  • Reducing the amount of time and energy they need to spend worrying about protecting the wikis, which could free up mental space to address new challenges

Use cases

More in T284465.

Components

  • A way to codify rules and policies that machines could "read" / "interpret"
    • To start, a team like Editing could hardcode these rules and policies on a one-off basis. At scale, if/when the concept proves valuable, we envision a way for volunteers to write custom rules based on the consensus reached at their respective projects (see the sketch below this list for a rough illustration of what such a rule might look like).
  • A way for the editing interface to locate content that violates these rules and policies
    • Note: "content" in this context refers to content that has already been published on the wiki and content that volunteers have added and have not yet published.
  • A way to surface improvement suggestions to volunteers who have an editing interface open.
    • E.g. "This is the issue. This is where the issue exists within the artifact (read: article, talk page comment you are drafting, etc.). This is what you can do to remedy the issue."
  • A way to make volunteers aware that issues exist within content they are reading / have not yet started to edit.

Also see the ===Components section in T276857.
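
As a rough, purely illustrative sketch of how the first three components could fit together, the TypeScript below shows one possible shape for a machine-readable rule, a hardcoded example rule, and a helper that locates violations and attaches a suggested remedy. None of these names (PolicyRule, RuleIssue, findIssues) or the unsourced-paragraph heuristic exist in MediaWiki or VisualEditor today; they are assumptions made up for this example.

```
// Hypothetical shapes only; nothing here is an existing MediaWiki/VisualEditor API.

// One located problem plus a human-readable remedy, matching the
// "this is the issue / this is where it is / this is what you can do" framing above.
interface RuleIssue {
  ruleId: string;
  start: number;       // character offsets into the draft or published text
  end: number;
  message: string;
  suggestion: string;
}

// A policy rule pairs a machine-readable check with the on-wiki policy it encodes.
interface PolicyRule {
  id: string;                        // e.g. "unsourced-claim"
  policyUrl: string;                 // the policy/guideline this rule is derived from
  appliesTo: 'article' | 'talk';     // which kind of artifact the rule covers
  check: (text: string) => RuleIssue[];
}

// An example of the kind of rule a team could hardcode on a one-off basis:
// flag paragraphs that contain no reference markup at all.
const unsourcedClaim: PolicyRule = {
  id: 'unsourced-claim',
  policyUrl: 'https://en.wikipedia.org/wiki/Wikipedia:Verifiability',
  appliesTo: 'article',
  check(text) {
    const issues: RuleIssue[] = [];
    let offset = 0;
    for (const paragraph of text.split('\n\n')) {
      if (paragraph.trim().length > 0 && !/<ref[\s>]/.test(paragraph)) {
        issues.push({
          ruleId: 'unsourced-claim',
          start: offset,
          end: offset + paragraph.length,
          message: 'This paragraph has no citations.',
          suggestion: 'Add a citation to a reliable source, or remove the claim.',
        });
      }
      offset += paragraph.length + 2; // + 2 for the "\n\n" separator
    }
    return issues;
  },
};

// The editing interface would run every enabled rule over the content being
// edited (or read) and surface the resulting issues to the volunteer.
function findIssues(rules: PolicyRule[], text: string): RuleIssue[] {
  return rules.flatMap((rule) => rule.check(text));
}

const draft = 'The sky is sometimes green.\n\nWater is wet.<ref>Ocean Handbook</ref>';
console.log(findIssues([unsourcedClaim], draft));
// -> one issue covering the first paragraph, with a suggested remedy attached
```

The design choice the sketch tries to capture is that each rule links back to the on-wiki policy it encodes, so the interface can always point volunteers to the consensus a suggestion is based on; at scale, the hardcoded check would be replaced by rules volunteers define from their projects' own consensus.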

Naming ideas

  • Policy Check
  • Intelligent edits / Edit intelligence
    • I'm attracted to the idea of framing this as infusing the collective intelligence of the wikis into the editing interfaces.
  • Edit assistance
  • Assisted edits
  • Augmented edits

References

Related tickets

Data

Policy documentation

Research

  • Proposed: AI-Models-For-Knowledge-Integrity
  • Automatically Labeling Low Quality Content on Wikipedia by Leveraging Patterns in Editing Behaviors, via @Halfak in T265163#7622952 (non-paywall link)
  • Conversations Gone Awry: Detecting Early Signs of Conversational Failure
  • Wikipedian Self-Governance in Action: Motivating the Policy Lens
  • Twitter Prompts Findings
    • "If prompted, 34% of people revised their initial reply or decided to not send their reply at all."
    • "After being prompted once, people composed, on average, 11% fewer offensive replies in the future."
    • "If prompted, people were less likely to receive offensive and harmful replies back."
  • Unreliable Guidelines: Reliable Sources and Marginalized Communities in French, English and Spanish Wikipedias
    • "A new editor wrote in an email that they perceived Wikipedia’s reliable source guidelines to have exclusionary features."
    • "...community consensus is a foundational pillar of the Wikimedia movement. We learned trainers see this process as privileging those who participated first in the development of the encyclopedia’s editorial back channels. As well, the participants in our community conversations were uncomfortable with the presumption that agreement is communicated through silence, which privileges those who have the time and feel comfortable speaking up and participating in editorial conversations."
    • "In English, contributors from English-speaking countries in Africa said their contributions often faced scrutiny. One organizer from an unnamed African country who participated in our session said when they hosted events, contributions were deleted en-mass for lacking reliability. This was demoralizing for the participants and required extra work by the trainers to stand up for their publications and citations, said one participant...To avoid new editors experiencing these disappointing responses, other trainers in English Wikipedia explained they would review sources before new editors begin editing."
    • "...the quantity of material that a trainer is required to parse in relation to reliable source guidelines is immense. As one participant said:"
      • "This bushy structure makes the guidelines pages unreadable. Who has the time and the meticulousness to read it completely without being lost to a certain point? It took me a decade to go through it [in French] and I must admit I’m not done yet!"

Ideas

  • Ideas for where/how we might introduce this feedback/interaction: T95500#6873217

Related on-wiki tools

On-wiki documentation

Related third-party tools

Open questions

  • 1. Responsibility: How much responsibility should the software lead people to think they have for "correcting" the issues within the content they're editing?
  • 2. Authority: How might this capability shift editors' perception of who is the authority on what's "best"? Might this tool cause people to listen to the software more than they do to fellow editors?
  • 3. Adaptability: How will this tool adapt/cope with the fact that policies and guidelines are constantly evolving?
  • 4. Reverts: Might this capability be impactful for people whose edits have been reverted?

Note: I've added the above to T265163's newly-created ===Open questions section.


i. https://superset.wikimedia.org/r/345
ii. Where "technical skills" could mean: adding text, formatting text, adding links, adding citations, publishing edits, etc.
iii. https://www.mediawiki.org/wiki/New_Editor_Experiences#Research_findings
iv. https://w.wiki/eMc
v. https://w.wiki/dSx
vi. https://w.wiki/dSv
vii. Perhaps the Research Team's "Thanks" research might be instructive here, for it explores how positive interactions/sentiment affect editing behavior: https://meta.wikimedia.org/wiki/Research:Understanding_thanks
viii. https://en.wikipedia.org/w/index.php?title=User_talk:Morgancurry0430&oldid=1038113933

Related Objects

This task is connected to more than 200 other tasks; only direct parents and subtasks are shown on the task page in Phabricator.
