
Paper submission on early indicators of toxic conversations
Closed, Resolved · Public

Description

Conduct a study of the relationship between conversation structure and toxicity.

  • Does selecting conversations by toxicity successfully identify conversations that have gone bad?
  • How much does context matter when people evaluate the toxicity of a comment?
    • Topic (page & time)
    • Conversation length & timing
    • Previous comments
  • How good are human annotators at predicting whether a conversation will go bad?
    • Can we train a model to help with this task directly?

Collaborators

  • Nithum Thain
  • Lucas Dixon
  • Yiqing Hua
  • Cristian Danescu-Niculescu-Mizil

In Q2 we aim to:

  • finalize the labeling schema
  • generate human labels for comments and conversations and analyze them
  • train a model to answer the above questions
  • submit a paper (the release of assets from this research, namely human- and machine-labeled data, code, an on-wiki report, and a preprint, is moving to a separate task)

If the paper is accepted (acceptance notification is in December), we're planning to write a story on the Wikimedia Blog (and potentially the Jigsaw blog), referencing the above assets.

Event Timeline

Currently conducting a dry run of the human labeling schema with @Nithum on Crowdflower.

DarTar renamed this task from Research on early indicators of toxic conversations to Paper submission on early indicators of toxic conversations.Nov 1 2017, 6:36 PM
DarTar updated the task description.
DarTar moved this task from In Progress to Done (current quarter) on the Research board.