
[EPIC] Refinements to Instrument Catalog View
Open, Low, Public

Description

Background/Goal

Now that we have a functional prototype of the instrument configuration user interface (T331514: [Goal] M1: Metrics Platform: Control Plane: Analytics instrumentation stream management UI; T360731: [Epic] MP Instrument Configurator Frontend; T360707: [Epic] MP Instrument Configurator Backend), we can begin to resolve some remaining open questions about the instrument catalog (T360747: Build the list/catalog view of the MPIC).

This work supports the goals outlined in the FY 24/25 annual plan:

SDS Objective 2: Product managers can quickly, easily, and confidently evaluate the impacts of product features.

Key Result 2.1: By the end of Q2, we can support 1 product team to evaluate a feature or product via basic split A/B testing that reduces their time to logged-in user interaction data by 50%.

KR/Hypothesis (Initiative)

This work began in Q2 of 23/24 with [[ https://app.asana.com/0/1206185374459780/1206185485422405 | [Hypothesis] SDS2.5.3 UX flow and UI Prototype for instrumentation configuration and orchestration ]]

If we create a UX flow and UI Prototype for instrumentation configuration and orchestration, we will ensure scope alignment across stakeholders and establish user-centered requirements that will inform the build/install/buy analysis.

and continued in Q3 with the implementation of a functional prototype under [[ https://app.asana.com/0/1206789271453386/1206789149149051 | [Hypothesis] SDS2.5.5 Build a service for Instrument Configuration ]]

If we build a service for instrument configuration, we can deliver a prototype that is flexible enough to scale and to integrate with our future experimentation flagging solution.

and will continue into FY 24/25 under [[ https://app.asana.com/0/1207634177379976/1207634432512644 | [Hypothesis] SDS2.1.4 Usability testing ]]

If we conduct usability testing on our prototype among pilot users of our experimentation process, we can identify and prioritize the primary pain points faced by product managers and other stakeholders in setting up and analyzing experiments independently. This understanding will lead to the refinement of our tools, enhancing their efficiency and impact.

Open Questions

  • What are best practices for documenting instruments and experiments? Do we need a per-instrument/experiment knowledge hub that captures everything from measurement design, specs, and runtime metadata to performance monitoring and analytics results?
  • Should there be a status indicating that the p-value has stayed below the significance threshold long enough for the results to be reviewed? (See the progress/status sketch after this list.)
    • Should progress be measured by sample size reached?
    • Should progress be measured by runtime duration?
  • Should you be able not only to turn off but also to delete (or just archive) an instrument from this table? (See the removal-levels sketch after this list.)
    • What does "delete" mean? Remove the instrument from this catalog list, remove the stream configuration, delete the metadata about the test, delete the collected data, or delete the table?
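For the progress and review-status questions above, here is a minimal sketch of how the catalog view could compute both candidate progress measures and a "ready to review" flag. All names (`InstrumentRow`, `targetSampleSize`, the seven-day stability window, and so on) are illustrative assumptions, not the actual MPIC schema or an agreed-upon policy:

```lang=typescript
// Hypothetical catalog-row shape; field names are illustrative and
// not the actual MPIC schema.
interface InstrumentRow {
  name: string;
  startedAt: Date;             // when data collection began
  plannedDurationDays: number; // planned runtime from the measurement spec
  targetSampleSize: number;    // sample size required by the power analysis
  currentSampleSize: number;   // units observed so far
}

// Option 1: progress measured by sample size reached.
function progressBySampleSize(row: InstrumentRow): number {
  return Math.min(1, row.currentSampleSize / row.targetSampleSize);
}

// Option 2: progress measured by elapsed runtime.
function progressByRuntime(row: InstrumentRow, now: Date = new Date()): number {
  const elapsedDays = (now.getTime() - row.startedAt.getTime()) / 86_400_000;
  return Math.min(1, elapsedDays / row.plannedDurationDays);
}

// "Ready to review" if the p-value has stayed below the threshold for a
// minimum number of consecutive daily checks (window size is an assumption).
function readyForReview(dailyPValues: number[], alpha = 0.05, windowDays = 7): boolean {
  if (dailyPValues.length < windowDays) {
    return false;
  }
  return dailyPValues.slice(-windowDays).every((p) => p < alpha);
}
```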
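For the delete/archive question, one way to make the decision concrete is to enumerate the levels of removal as an ordered set, where each deeper level implies the shallower ones. The enum below is purely hypothetical; none of these operations exist in MPIC today:

```lang=typescript
// The levels of "delete" raised above, made explicit as a single choice.
enum RemovalLevel {
  Archive = 'archive',                  // hide from the catalog, keep everything
  RemoveStreamConfig = 'remove-config', // stop collection, keep metadata and data
  DeleteMetadata = 'delete-metadata',   // also drop the test's metadata
  DeleteData = 'delete-data',           // also drop the collected events
  DropTable = 'drop-table',             // also remove the backing table
}
```

Whichever subset the catalog ends up exposing, naming the levels explicitly may make it easier to answer the question per stakeholder (for example, analysts might only need Archive, while stream owners might need RemoveStreamConfig).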

Success metrics

  • all open questions are answered or intentionally deferred prior to beta release.

In scope

  • questions documented here
  • questions that arise from user testing the alpha release (aka functional prototype)

Out of Scope

  • questions that arise after beta release

Artifacts & Resources

  • Figma file
  • Slack conversation