
Configure analytics dashboard for Reading List A/B test
Closed, ResolvedPublic

Description

This is a placeholder task for Data Analytics to help set up a dashboard for the Reading List desktop A/B test.

(Is this a dashboard that has already been set up? https://superset.wikimedia.org/superset/dashboard/p/b9LvJ777BmN/ )

Instrumentation spec

Instrumentation tickets can be found attached to the epic https://phabricator.wikimedia.org/T402210

Acceptance criteria

  • Analytics dashboards are set up
    • Defined metrics in metrics_catalog.yaml
    • Experiment registered with defined metrics in experiments_registry.yaml.
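For reference, the two registration steps above might look roughly like the following. This is a hypothetical sketch only: the metric name, experiment slug, and every field name are illustrative assumptions, not the actual xLab schema.

```yaml
# metrics_catalog.yaml — hypothetical entry (field names are
# illustrative assumptions, not the real schema)
metrics:
  - name: reading_list_click_rate        # assumed metric name
    description: Proportion of sessions with at least one reading-list click
    type: proportion

# experiments_registry.yaml — hypothetical entry referencing
# the metric defined above by name
experiments:
  - name: reading-list-desktop-ab-test   # assumed experiment slug
    metrics:
      - reading_list_click_rate
```

The key point is that the experiment entry references metrics by the names already defined in the catalog, which is why mismatched event/metric names (as noted later in this task) cause problems.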

Event Timeline

Status update:
Dashboards for both tier runs have been enabled with dummy metrics.
Here are the dashboard links

Snapshot

image.png (1×2 px, 324 KB)

Merge request
Snapshot

image.png (1×2 px, 344 KB)

Note
The following metrics were originally planned to be enabled in the xlab dashboard, but will be moved to notebook analysis.

  • Internal referral metric. Since we discovered that the feature was deployed on mobile web without instrumentation, we plan to shift the analysis window to start from the date when instrumentation is live on mobile.
  • Retention rate metrics. The instrumentation did not adopt the recommended event name page-visited, which has already been configured in metrics_catalog.yaml. To avoid duplicating the metric sets, we recommend analyzing them in a notebook.

Oh no! Any idea why we deployed on mobile without instrumentation? Or how we missed the page-visited event name? I'm thinking more about refining our process so as not to miss these in the future.

@JVanderhoop-WMF this was the result of several factors that combined in an unfortunate way. The team underwent a reorg while transitioning Reading List from an intern project to a fully productized build, and the mobile web version was already built and enabled by default. While we were determining the parameters of the experiment, Jennifer advised us to focus on one surface at a time, so I made the call not to release mobile web simultaneously. I was under the impression that the mobile version hadn't been built yet, when in fact what we needed to do was disable the mobile web feature that was already bundled with desktop.

This particular experiment approach was also difficult for us to QA. Reading List is enabled on Testwiki and the Beta cluster as a Beta feature you can turn on, but on those environments we are also building the mobile web version of the experience, so if we tried to QA the "absence" of the mobile feature, we couldn't do it on either environment. I am still trying to figure out how to QA something like this in production, e.g. whether we can insert one of our production accounts into the experiment bucket (without screwing with the collected data).

As for missing one of the data params, this was on me. A lot of time passed after the analyst completed the initial instrumentation spec, and we spent that time completely revamping our approach to experiment bucketing and simplifying the UI elements required for the experiment. Even as a brand-new PM on the team, I should have been more diligent about revisiting and re-aligning on the specs before creating our instrumentation Phab tickets.

Hopefully this kind of "perfect storm" of factors won't happen again, but we are working on additional team processes to help prevent it, such as defining a dedicated instrumentation-owner role whenever we work on a new feature, and agreeing on a single source of truth for specs so there's no need to hunt for the most updated version across Phab and sheets.

Thanks for the context @HFan-WMF ! Totally understandable that it happened, and I think stepping back to ask "how might we have caught this?" is always a good step toward preventing it in the future. Lots of unique elements on this one, so there's no single golden solution.