
Create a system to encode best practices into editing experiences
Open, Needs Triage, Public

Description

🌱 In the forest that is Phabricator, this ticket is very much a seedling on the forest floor. Read: this task is a gathering place/work in progress.


This parent task is intended to help gather and advance the thinking around how the visual editor might be enhanced to help people learn the social conventions (read: policies and guidelines) and exercise the judgement necessary to become productive and constructive Wikipedia editors.

Background

Visual editor's growing popularity among people new to editing Wikipedia [i] suggests it has been reasonably successful at helping people to learn the technical skills [ii] necessary to edit Wikipedia.

The trouble is that the edits these newcomers make often break or defy Wikipedia policies and guidelines.

This task is about exploring how the visual editor could be augmented/enhanced to help people learn these policies and guidelines and exercise the judgement necessary to become productive and constructive Wikipedia editors.

Potential impact

New and Junior Contributors
Working on talk pages has led us to notice that for many newcomers, the earliest human interactions they have on-wiki center around something they did wrong, like not contributing in the Wikipedia Way [iii][iv][v][vi][viii].

We wonder how newcomers perceive these interactions and, further, whether having a positive first interaction with another Wikipedia editor could increase the likelihood that they continue editing. [vii]

Senior Contributors
We wonder whether adding more "productive friction" to publishing flows could free Senior Contributors up to do more high-value work by:

  • Reducing the amount of work they need to do reverting lower quality edits by increasing the quality of edits
  • Reducing the amount of work they need to do blocking bad faith/vandalistic editors by lowering the likelihood these actors are able to complete/publish the changes that cause them to be blocked
  • Reducing the amount of work and time they spend writing on new users' talk pages to alert them to policies and/or guidelines they've [likely] unknowingly broken

Use cases

More in T284465.

Naming ideas

  • Policy Check
  • Intelligent edits / Edit intelligence
    • I'm attracted to the idea of framing this as infusing the collective intelligence of the wikis into the editing interfaces.
  • Edit assistance
  • Assisted edits
  • Augmented edits

References

Data

Policy documentation

Research

  • Conversations Gone Awry: Detecting Early Signs of Conversational Failure
  • Wikipedian Self-Governance in Action: Motivating the Policy Lens
  • Twitter Prompts Findings
    • "If prompted, 34% of people revised their initial reply or decided to not send their reply at all."
    • "After being prompted once, people composed, on average, 11% fewer offensive replies in the future."
    • "If prompted, people were less likely to receive offensive and harmful replies back."
  • Unreliable Guidelines: Reliable Sources and Marginalized Communities in French, English and Spanish Wikipedias
    • "A new editor wrote in an email that they perceived Wikipedia’s reliable source guidelines to have exclusionary features."
    • "...community consensus is a foundational pillar of the Wikimedia movement. We learned trainers see this process as privileging those who participated first in the development of the encyclopedia’s editorial back channels. As well, the participants in our community conversations were uncomfortable with the presumption that agreement is communicated through silence, which privileges those who have the time and feel comfortable speaking up and participating in editorial conversations."
    • "In English, contributors from English-speaking countries in Africa said their contributions often faced scrutiny. One organizer from an unnamed African country who participated in our session said when they hosted events, contributions were deleted en-mass for lacking reliability. This was demoralizing for the participants and required extra work by the trainers to stand up for their publications and citations, said one participant...To avoid new editors experiencing these disappointing responses, other trainers in English Wikipedia explained they would review sources before new editors begin editing."
    • "...the quantity of material that a trainer is required to parse in relation to reliable source guidelines is immense. As one participant said:"
      • "This bushy structure makes the guidelines pages unreadable. Who has the time and the meticulousness to read it completely without being lost to a certain point? It took me a decade to go through it [in French] and I must admit I’m not done yet!"

Ideas

  • Ideas for where/how we might introduce this feedback/interaction: T95500#6873217

Related on-wiki tools

Related third-party tools

Open questions

  • 1. Responsibility: How much responsibility should the software lead people to think they have for "correcting" the issues within the content they're editing?
  • 2. Authority: How might this capability shift editors' perception of who is the authority on what's "best"? Might this tool cause people to listen to the software more than they do to fellow editors?
  • 3. Adaptability: How will this tool adapt/cope with the fact that policies and guidelines are constantly evolving?
  • 4. Reverts: Might this capability be impactful for people whose edits have been reverted?

Note: I've added the above to T265163's newly created === Open questions section.


i. https://superset.wikimedia.org/r/345
ii. Where "technical skills" could mean: adding text, formatting text, adding links, adding citations, publishing edits, etc.
iii. https://www.mediawiki.org/wiki/New_Editor_Experiences#Research_findings
iv. https://w.wiki/eMc
v. https://w.wiki/dSx
vi. https://w.wiki/dSv
vii. Perhaps the Research Team's "Thanks" research might be instructive here, as it explores how positive interactions/sentiment affect editing behavior: https://meta.wikimedia.org/wiki/Research:Understanding_thanks
viii. https://en.wikipedia.org/w/index.php?title=User_talk:Morgancurry0430&oldid=1038113933

Related Objects

Event Timeline

ppelberg renamed this task from Incorporate policies and guidelines into the editing experience to Use VE to teach newcomers good judgement. Oct 9 2020, 7:19 PM
ppelberg updated the task description.
ppelberg updated the task description.
ppelberg renamed this task from Use VE to teach newcomers good judgement to Turn VE into a Wikipedia Editor. Mar 31 2021, 4:00 PM
ppelberg added a subscriber: Pginer-WMF.

Task description update

  • ADDED links to two related tools to the task description [i] that @Pginer-WMF shared.

i. https://languagetool.org/ and https://hemingwayapp.com/

ppelberg renamed this task from Turn VE into a Wikipedia Editor to Create a system to express policies and guidelines into editing experiences. Apr 2 2021, 10:41 PM
ppelberg updated the task description.

Task description update

ppelberg updated the task description.
ppelberg renamed this task from Create a system to express policies and guidelines into editing experiences to Create a system to encode best practices into editing experiences. Apr 6 2021, 4:30 PM

I wanted to share one example that illustrates the need for this, along with some ideas about where it could be useful.

In Content Translation we wanted to anticipate the issues that users may face when trying to publish their translations because of abuse/edit filters. The reason was that it was hard for translators to find where in the content the problem triggering the abuse filter was. However, the current way filters are implemented does not provide feedback about where the issues are, and checking the content on a paragraph basis is not always reliable and increases maintenance costs, as reported in T266380: Remove ContentTranslation code that emulates AbuseFilter, because it's hard to maintain. Having a cleaner solution for flagging issues contextually in real time would avoid these problems.

Here are some examples of the issues that a system like this could help surface (the translation-related ones we experienced more directly in Content Translation):

  • Lack of sources after adding a paragraph, to encourage verifiability (a minimal sketch of such a check appears after this list).
  • Lack of sources in the local language of the wiki. The system could encourage adding an alternative source in the local language of the wiki to make verifiability easier for most reviewers.
  • Propose alternative terms to keep terminology consistent within the article or across articles. When translating, users can often translate a word or expression in multiple ways, and they need to check which term is used in similar articles or in other parts of the article, or consult a separate glossary defined elsewhere, which requires additional effort and breaks the user's workflow.
    • Encourage a consistent language variant. A particular case of the above scenario is when multiple variants of a language coexist under some coordination rules. For example, English Wikipedia allows both British and American English but requires the whole article to be written in the same variant.
  • Encourage civil language in talk page replies. For example, suggesting to ask for references instead of claiming that something is a lie.
  • Encourage more elaborate responses on talk pages. For example, when sharing a link to a policy, propose including a two-line description of it or of how it applies to the particular case.
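
To make the first item above more concrete, here is a minimal TypeScript sketch of a check that flags newly added paragraphs lacking a citation. It is only an illustration under assumed inputs: the names (EditCheckIssue, checkAddedParagraphsForSources) are hypothetical and are not part of any existing MediaWiki or VisualEditor API, and a real implementation would inspect the editor's document model rather than raw wikitext strings.

```
// Illustrative sketch only. These names are hypothetical and do not belong to
// any existing MediaWiki/VisualEditor API.

interface EditCheckIssue {
  paragraphIndex: number; // which added paragraph triggered the check
  message: string;        // guidance shown to the editor
  policyUrl: string;      // link to the relevant policy or guideline
}

// Flag newly added paragraphs that contain no citation, so the editor can be
// nudged toward verifiability before publishing.
function checkAddedParagraphsForSources(addedParagraphs: string[]): EditCheckIssue[] {
  const issues: EditCheckIssue[] = [];
  addedParagraphs.forEach((wikitext, index) => {
    const hasCitation = /<ref[\s>]/.test(wikitext) || /\{\{\s*[Cc]ite/.test(wikitext);
    if (wikitext.trim().length > 0 && !hasCitation) {
      issues.push({
        paragraphIndex: index,
        message: 'This new paragraph has no citation. Consider adding a reliable source.',
        policyUrl: 'https://en.wikipedia.org/wiki/Wikipedia:Verifiability',
      });
    }
  });
  return issues;
}
```

Because a check like this runs while editing, the feedback can point at the specific paragraph rather than rejecting the whole edit at publish time, which is the gap the AbuseFilter-based approach left in Content Translation.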

Task description update

  • ADDED a link to the results of Twitter's recent "prompts" experiment via @Sadads. This supports what @Pginer-WMF mentioned in T265163#7023128:
    • "Encourage civic language on talk page replies. For example, suggesting to ask for references instead of claiming that something is a lie."

Task description update

  • ADDED a link to @ValeJappo's BOTutor to the task description's Related on-wiki tools section. Thank you, @MMiller_WMF, for drawing our attention to this.
    • For context, this bot, as @ValeJappo describes in T255037#7126416, "...detects and then warns users after they make common errors."

cc @OTichonova

T225750 might live under this task somewhere

ppelberg updated the task description.
  • ADDED the work @NRodriguez and the Community-Tech team are doing to discourage editors from linking to disambiguation pages to the task description's === Use cases section.
ppelberg updated the task description.
  • ADDED the proof of concept @NRodriguez and the Community-Tech team are developing to make people aware, in real time, when they've added a link to a disambiguation page: T288589.