
Section-Level Images: leading indicators
Open, MediumPublic


User story

As the Growth team, I want to understand the impact of adding a new Section-Level structured task, because I want to know whether we should continue to roll out this task, roll it back, or make improvements.

As the Structured Data team, I want to understand the impact of adding a new Section-Level structured task, because I need to add details to the SDAW report (due September 1, 2023).

Not an A/B test

As we did previously for Add a Link in T286816 and article-level Add an Image in T311531, we're interested in understanding the effect of the section-level "add an image" task. Unlike those other two tasks, we do not plan to run the section-level "add an image" task as an A/B test. There are a few reasons for choosing a different approach:

  • The task will be very similar to the original "add an image" task, so results are likely to be similar.
  • We don't want to expose this as the first task suggested to newcomers, since we see it as a task that newcomers should "level up" to once they are familiar with the article-level "add an image" task.
  • Since this won't be a task provided to all newcomers, it would be a challenge to gather enough data to come to a statistically significant conclusion.
  • Due to the shift in focus for the upcoming Annual Plan, it's unlikely we will be able to devote time to iterating on improvements based on experiment findings. Instead we will try to iterate more quickly on feedback from user testers and Growth pilot wiki feedback.

Rather than running a lengthy A/B test, we will instead define leading indicators prior to release and monitor them on an ongoing basis.

Leading indicators

We define the following leading indicators and plans for action:

  1. Task completion rate. We compare this to the completion rate of the article-level Add an Image task and we expect it to be roughly similar. If it is significantly lower, we run a funnel analysis to identify the drop-off point (unless it's the caption stage as described below).
  2. Caption stage bounce rate. Similar criteria as for the Task completion rate.
  3. Revert rates. Similar criteria as for the Task completion rate, but we also plan to monitor the revert rates for the copy edit and Add a Link tasks for comparison.

In addition, we plan to monitor the following measurements to get a sense of the adoption rate of the task:

  1. Number of users who have the task selected.
  2. Number of tasks initiated.
  3. Number of tasks completed.
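The two rate-based indicators above are simple ratios over the adoption counts. A minimal sketch of how they could be computed (function names and the example counts are hypothetical, not from the actual analysis pipeline):

```python
def completion_rate(tasks_initiated: int, tasks_completed: int) -> float:
    """Share of initiated tasks that ended in a saved edit."""
    return tasks_completed / tasks_initiated if tasks_initiated else 0.0


def revert_rate(saved_edits: int, reverts: int) -> float:
    """Share of saved edits that were subsequently reverted."""
    return reverts / saved_edits if saved_edits else 0.0


# Illustrative numbers only, not measured data:
print(completion_rate(200, 150))  # 0.75
print(revert_rate(150, 15))       # 0.1
```

In practice these would be compared against the same ratios for the article-level Add an Image task, per the criteria above.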

Acceptance Criteria

Event Timeline

KStoller-WMF moved this task from Inbox to Triaged on the Growth-Team board.
KStoller-WMF renamed this task from Section-Level Images: experiment analysis to Section-Level Images: leading indicators. (May 1 2023, 10:28 PM)
KStoller-WMF assigned this task to nettrom_WMF.
KStoller-WMF updated the task description.
nettrom_WMF raised the priority of this task from High to Needs Triage. (May 8 2023, 4:58 PM)
nettrom_WMF added a project: Product-Analytics.

Adding the Product Analytics tag and resetting the priority so that Product Analytics can triage this tomorrow. Since this is planned for late May/early June, the work needed on this at the moment is more of a planning nature, which to me fits into PA's "medium priority".

mpopov triaged this task as Medium priority. (May 9 2023, 5:09 PM)
mpopov moved this task from Triage to Current Quarter on the Product-Analytics board.

I've chatted with @cchen about what the Structured Data team needs so that we're not duplicating efforts, and learned that there isn't currently any overlap between us. Adding this comment mainly to document that the SD team will send out notifications about section-level images to experienced editors. If the Growth team at some point chooses to measure something like edits, we might want to make sure we take that into account.

I pulled data from Growth's KPI Grafana board from 2023-07-31 to 2023-08-28 (available here) for Section-Level and Article-Level suggestions. This timeframe was chosen because it should be less affected by the June/July slump in activity that we often see on the wikis. The end date is limited by the team shutting off image suggestions in late August (see T345188 for more information). This date range covers four whole weeks of data. While this dataset cannot be split by platform (desktop vs. mobile web) and does not allow more fine-grained user filtering, it was easily available and provides a reasonably good picture, sufficient for this kind of analysis at this time.

We get the following table of activity for the two tasks based on this data:

Task type | Task clicks | Saved edits | Reverts | Task completion rate | Revert rate

We see that the task completion rate for section-level image suggestions is high, on par with Add a Link (ref) when that was released. This is likely because the section-level task is something users either choose themselves in the task selection dialogue, or choose to try out after being asked through the "Try a new task" dialogue that's part of Levelling Up. Those users are therefore likely already experienced editors and don't have too many issues with completing this task.

The revert rate for the section-level task is higher than for the article-level task. We don't think this difference is cause for concern, for two reasons. First, it might be harder to agree that an article is clearly improved by adding a section-level image than by adding an article-level image. Second, articles suggested for section-level images already have a lead image, which might mean they're also longer and have more contributors scrutinizing the edit.

We'll investigate the caption stage drop-off rate at a later stage, and at the same time see if there are key differences in task completion and revert rates based on platform.

@nettrom_WMF: Hi, the Due Date set for this open task passed a while ago.
Could you please either update or reset the Due Date (by clicking Edit Task), or set the status of this task to resolved in case this task is done? Thanks!