Many of the Audiences teams' FY18-19 annual plans rely on editing metrics (e.g. new editor acquisition, retention of new/existing editors, edit revert rates). Where possible, those teams would like the ability to run A/B tests of new features intended to move those metrics, so they can evaluate each feature's efficacy before releasing it.
While we have prior work, methodologies, and technologies in place for running experiments on changes to reading experiences (e.g. Page Previews), we currently lack the infrastructure and data pipeline for:
- randomly sampling contributors for feature enablement, on specific platforms and possibly in cohorts
- tracking which contributors are using what experimental feature
- tracking which contributions were made with an experimental feature
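To make the first requirement concrete, the sampling step could work via deterministic bucketing: hashing a stable contributor identifier together with an experiment name yields a uniform value that decides whether the contributor is sampled and, if so, which cohort they fall into, with no per-user assignment state to store. A minimal sketch (all names here, including `assign_cohort`, are hypothetical, not an existing MediaWiki API):

```python
import hashlib

def assign_cohort(user_id, experiment, buckets=("control", "treatment"),
                  sample_rate=0.1):
    """Deterministically assign a contributor to an experiment cohort.

    Hashing (experiment, user_id) gives a stable pseudo-uniform value in
    [0, 1): the same contributor always gets the same answer for a given
    experiment, and different experiments are bucketed independently.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    u = int(digest[:15], 16) / 16**15  # pseudo-uniform in [0, 1)
    if u >= sample_rate:
        return None  # contributor not sampled into this experiment
    # Map the sampled mass evenly onto the cohort buckets.
    idx = int(u / sample_rate * len(buckets))
    return buckets[min(idx, len(buckets) - 1)]
```

Because assignment is a pure function of the identifier and experiment name, any platform (desktop, mobile web, apps) can compute it consistently, and the same value can be logged alongside each contribution to satisfy the tracking requirements above.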
Given that scope, this will likely be a multi-team, cross-departmental endeavor.