
Figure out how to measure the impact of edit review improvements on the new user experience
Closed, Declined · Public

Description

The Collaboration Team is working on a project in the area of edit-review, designed to have an effect on new-user retention. To understand this problem better and determine a baseline for measuring progress, we would like to know more about the level at which new users do or don't continue to contribute to the wikis.

@Halfak can probably help with definitions: what constitutes a new user, what time period is most usefully defined as retention, etc. But I'd think we'd want to see the drop-off as a percentage after one month, two months, three months, etc., and how that changes over time. Most useful would be a graph that updates automatically and that we can continue to reference.

Neil has indicated that this type of measurement can be computationally intensive. If anyone can suggest tricks to assist with this, please do.

Event Timeline

In addition to the general metric, we may want to be able to measure it for a particular group of users: those who were reviewed by editors who reached out to them through the improved tools.

The "classic" new user retention metric is "returning new editor" – i.e., someone who creates an account, makes 5+ edits to that wiki that month (so one of that month's "new editors"), then the next month makes 5+ edits to that wiki as well.

Based on my entirely non-scientific anecdata about how long users stick around (if they stick around at all), this feels pretty short-term, but I'm not sure we have hard data on how accurate my feeling is.

In this study, new editor survival seems to be defined as "make at least one edit 2-6 months after the first edit session".
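Roughly, in code (approximating months as 30 days, which is my assumption, not necessarily the study's):

```python
from datetime import timedelta

def survived(first_edit, edit_times):
    """Survival per that definition: at least one edit in the window
    2-6 months after the first edit. Months are approximated as 30
    days here; the study may bucket time differently."""
    start = first_edit + timedelta(days=60)
    end = first_edit + timedelta(days=180)
    return any(start <= t <= end for t in edit_times)
```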

The problem with long-term measurements is that it takes time to see the effects of a change (e.g., waiting six months to see the effect of a change made today).

Ideally, we would like a measurement that works for day-to-day operations in the short term while remaining representative of long-term trends, but if that is not possible we may need to work with two different measurements.

And what about asking people directly with regular micro-surveys?

@Jdforrester-WMF's summary of returning new editor wasn't quite right. This metric measures the proportion of editors who return for at least a second edit session. This metric is useful for looking at whether someone found the first experience (post registration) rewarding enough to repeat it. With every returning session, the "hazard of churn" decreases predictably.
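To illustrate what computing this could look like, here's a sketch that groups edits into sessions using an inactivity cutoff; the one-hour value is a common convention in session research and an assumption of mine, not necessarily the exact operationalization used for this metric:

```python
def count_edit_sessions(edit_times, cutoff_seconds=3600):
    """Group a user's edit timestamps (datetime objects) into sessions:
    a gap longer than cutoff_seconds starts a new session."""
    times = sorted(edit_times)
    if not times:
        return 0
    sessions = 1
    for prev, cur in zip(times, times[1:]):
        if (cur - prev).total_seconds() > cutoff_seconds:
            sessions += 1
    return sessions

def returned_for_second_session(edit_times):
    """True if the user came back for at least a second edit session."""
    return count_edit_sessions(edit_times) >= 2
```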

In my past work looking at the effects of the Teahouse on newcomer retention, we were able to see the largest effects when looking at the rate at which editors continued editing between 2-6 months after registration.

Generally, I think that we should tailor our retention and engagement metrics to the experiences we want to understand. E.g., if we think we're most likely to affect whether a newcomer will at least return after their first editing activities, then "returning new editor" is a good metric. If we are looking to affect how newcomers integrate into Wikipedia's social structures, then I think we'll want something more specific to that dynamic. I like to develop new metrics at the intersection of quant and qual methods. E.g., to know whether newcomers feel more "welcome" in Wikipedia, we'll probably need to ask them (a micro-survey is one option), but we might find that "welcomed newcomers" engage in different behaviors that we can measure (e.g., edits to Talk and User_talk pages).
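As one illustration of such a behavioral measure, here's a sketch that computes the share of a newcomer's edits landing in Talk or User_talk (the namespace numbers are MediaWiki's standard ones; treating this share as a proxy for "feeling welcome" is exactly the kind of assumption that would need qualitative validation):

```python
# MediaWiki's standard namespace numbers: 1 = Talk, 3 = User_talk.
TALK_NAMESPACES = {1, 3}

def talk_edit_share(edit_namespaces):
    """Fraction of a user's edits made in Talk or User_talk pages,
    one rough behavioral proxy for social integration."""
    namespaces = list(edit_namespaces)
    if not namespaces:
        return 0.0
    return sum(ns in TALK_NAMESPACES for ns in namespaces) / len(namespaces)
```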

Agree with @Halfak. If the 'treatment' in this case is that the editor received some sort of message from an experienced Wikipedian who reached out to them via the feed, then it would be appropriate to use a study setup similar to that of the Teahouse long-term newcomer retention study, perhaps with some adjusted parameters (different time buckets, edits to different namespaces, etc.) based on our best guess of the nature, strength, and duration of the effect produced by the treatment.

jmatazzoni renamed this task from Measure New-User Retention to Figure out how to Measure New-User Retention. Jan 5 2017, 10:21 PM
nshahquinn-wmf renamed this task from Figure out how to Measure New-User Retention to Figure out how to measure the impact of edit review improvements on the new user experience. Feb 7 2017, 10:21 PM

@Deskana @Neil_P._Quinn_WMF @jmatazzoni I see that this is being de-prioritized. I understand that y'all are busy and that impact analysis is often seen as an unnecessary extra step, especially when the product seems to be "working fine", but I think in this case it would be really useful to perform such an analysis. If Neil or some other analyst wants to take this on, I can collaborate with that person on the study design.

@Capt_Swing I agree, this would definitely be a useful piece of research. I don't think I can take this on right now, but I've at least promoted it on our workboard so that it stays on my mind. Thank you very much for offering your support; if I do get a chance to do this, I'll definitely take you up on it.

Awesome, @Neil_P._Quinn_WMF! Cheers to you for taking a user-centered approach to building WMF products ;) I should make you a barnstar.

Yes, cheers. But we still don't have an analyst. We know anecdotally that most people using the Newcomers filter are doing it so they can shoot newcomers down. But an easy thing to do would be to look at who is using Newcomers in combination with filters that might indicate a different intent: e.g., Newcomers + Very likely good. From that it would be hard to DEMONSTRATE an impact. But knowing that people were using it with that intent would be interesting.
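A sketch of that co-occurrence count, assuming (hypothetically; this is not an actual EventLogging schema) that we can get one set of selected filter names per RecentChanges page view:

```python
from collections import Counter

def newcomers_filter_combos(filter_selections):
    """Tally which other filters co-occur with 'Newcomers'.

    filter_selections: iterable of sets of filter names, one per page
    view -- a hypothetical input shape for illustration only.
    """
    combos = Counter()
    for selected in filter_selections:
        if "Newcomers" in selected:
            combos.update(f for f in selected if f != "Newcomers")
    return combos

# e.g. newcomers_filter_combos([{"Newcomers", "Very likely good"},
#                               {"Newcomers"}])
# -> Counter({"Very likely good": 1})
```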

nshahquinn-wmf added subscribers: Nettrom, JKatzWMF.

Unfortunately, I didn't have a chance to do this, and I think the window of interest has passed. Sorry, @Capt_Swing! Measuring impact is crucial, but particularly last year, our capacity to do it was very low.

So far this year, we're taking a much more proactive approach to this in Audiences, helped by the fact that we have a centralized Product Analytics team, a great manager in @JKatzWMF, and another skilled analyst arriving very soon (*cough* @Nettrom). Hopefully we'll deliver, and future projects won't leave important questions like this unanswered 🤞

Understood, @Neil_P._Quinn_WMF. Hopefully y'all can move in this direction once the new team is complete ;) Let me know...