[EPIC] Infrastructure for interventions impacting editing metrics
Open, Normal, Public

Description

Many of the Audiences teams' FY18-19 annual plans rely on editing metrics (e.g. new editor acquisition, retention of new/existing editors, edit revert rates). Where possible, those teams would like the ability to A/B test new features intended to move those metrics, so that their efficacy can be evaluated before release.

While we have prior work, methodologies, and technology in place for experimenting with changes to reading experiences (e.g. Page Previews), we currently lack the infrastructure and data pipeline for:

  • randomly sampling contributors for whom to enable features, on specific platforms and perhaps in cohorts (see the sketch after this list)
  • tracking which contributors are using what experimental feature
  • tracking which contributions were made with an experimental feature
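
As a minimal sketch of one common approach to the sampling piece: deterministic bucketing, where a contributor's persistent user ID is hashed together with an experiment name so that assignment is stable per user within an experiment and independent across experiments. Everything below (the `assign_bucket` helper, the experiment label, the event fields) is hypothetical and illustrative, not an existing MediaWiki API:

```
import hashlib
import json

def assign_bucket(user_id: int, experiment: str,
                  buckets=("control", "treatment")) -> str:
    """Deterministically assign a user to an experiment bucket.

    Hashing the (experiment, user_id) pair gives each user a stable
    assignment within an experiment and independent assignments
    across experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return buckets[int(digest, 16) % len(buckets)]

# A hypothetical record tying a contribution to the experimental
# feature it was made with -- the kind of row the tracking bullets
# above would require (all field names are illustrative).
event = {
    "experiment": "new-editor-onboarding-v1",
    "user_id": 12345,
    "bucket": assign_bucket(12345, "new-editor-onboarding-v1"),
    "platform": "desktop",
    "revision_id": 987654321,
}
print(json.dumps(event))
```

One appeal of hashing over storing assignments is that no server-side lookup table is needed: any part of the pipeline can recompute a user's bucket from the ID and experiment name alone.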

Therefore, this might very well be a multi-team, cross-departmental endeavor.

Event Timeline

mpopov created this task. Oct 26 2018, 7:06 PM

To clarify just in case: A/B testing of editing features and/or the sampling of contributors has been done before, e.g. by the old Growth team (example) or in the Teahouse study. In some sense it is actually easier than for readers, because (logged-in) editors have persistent, public user IDs. I understand the current task to be about building a more general, easy-to-use infrastructure.

nettrom_WMF moved this task from Triage to Backlog on the Product-Analytics board. Nov 9 2018, 7:41 PM
kzimmerman triaged this task as Normal priority. Jun 28 2019, 6:34 PM
kzimmerman added a project: Better Use Of Data.
kzimmerman added a subscriber: kzimmerman.

This falls within the scope of work we have planned for FY19-20, so I'm tagging it with Better Use of Data. It may be adjusted or consolidated later.