Context
To build an effective experimentation platform (Experimentation Lab), we first need to understand how our teams currently measure success. This requires a thorough audit of all metrics, measurement tools, and data collection methods—both current and historical. By examining teams' existing behaviors and requirements, we'll identify where standardization is needed and what analytical capabilities are truly necessary.
Teams are unlikely to adopt new methods if they don't align with their current practices or if the tools don't support a unified approach to experimentation. Understanding these dynamics will help us prioritize which features to build into the Experimentation Lab.
Through this assessment, we'll discover the essential metrics and capabilities needed to foster a shared culture of experimentation, while ensuring we don't build more sophisticated analytical tools than teams can effectively use.
Goal
To drive innovation through timely, data-driven decision making and advance our annual plan's SDS 2 Objective, we will conduct a thorough audit of existing metrics, instrumentation, and team practices. The findings will inform the development of a unified experimentation framework that teams will readily adopt.
Tasks
(will need subtasks for each item)
- Audit Current Metrics
- Compile a list of all metrics from the Annual Plan Process (APP)
- Document each metric's:
- Definition and calculation method
- Data sources and dependencies
- Teams using the metric
- Usage frequency and purpose (planning, KPIs, daily, weekly, monthly, etc.)
- Code/query references
- Industry standard vs. custom metrics
- Current importance rating, both for the teams using it and for business operations
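To keep the audit consistent across teams, the per-metric documentation above could be captured in a machine-readable record. The sketch below is one possible shape, assuming nothing about our existing systems; all field names and the example values are hypothetical, not an existing schema.

```python
from dataclasses import dataclass

# Hypothetical record for one audited metric; field names mirror the
# checklist above and are illustrative, not an existing schema.
@dataclass
class MetricRecord:
    name: str
    definition: str               # plain-language definition
    calculation: str              # formula or query reference
    data_sources: list[str]       # upstream tables/streams and dependencies
    owning_teams: list[str]       # teams using the metric
    usage_frequency: str          # "daily", "weekly", "planning", ...
    code_refs: list[str]          # repo paths or query IDs
    is_industry_standard: bool    # industry standard vs. custom
    importance: str               # e.g. "critical", "high", "medium", "low"

# Example entry (values made up for illustration):
example = MetricRecord(
    name="weekly_active_users",
    definition="Distinct users with at least one session in a 7-day window",
    calculation="COUNT(DISTINCT user_id) over trailing 7 days",
    data_sources=["events.sessions"],
    owning_teams=["Growth"],
    usage_frequency="weekly",
    code_refs=["analytics/metrics/wau.sql"],
    is_industry_standard=True,
    importance="critical",
)
```

A structured record like this would also let the gap analysis later in this plan be partly automated rather than done by hand.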
- Instrument & Data Source Analysis
- Map all existing instrumentation in production, including:
- Schema and repository location
- Data volume and growth trends
- Maintenance status (active/passive)
- Known quality issues
- Risk levels and SLOs
- Access controls
- Retention policies
- Update frequency (real-time data collection, daily batch processing, weekly aggregations, monthly reporting cycles, event-driven updates triggered by specific actions), including historical changes to update frequency and impact on data freshness requirements
- Associated metadata
- Who owns/maintains the collection
- When it was implemented
- Last modification date
- Planned sunset date (if any)
- Schema versions
- Data validation rules
- Processing pipeline (if any)
- Error handling procedures
- Dependencies on other systems
- Purpose for data collection (primary metric, secondary metric, etc.)
- Related experiments or features
- Any additional teams who rely on this data
- Organizational criticality level
- Privacy / security classifications
- Associated Dashboards
- Associated queries or reports
- Known downstream dependencies
- Access patterns (how do people use this data? In what form? Through which tools?)
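The instrumentation mapping above could likewise be collected as one inventory entry per instrument. The sketch below shows a possible entry plus a simple triage check; every key, value, and name here is a made-up illustration, not a description of any real instrument or system.

```python
# Hypothetical inventory entry for one production instrument; keys mirror
# the audit checklist above and are illustrative, not an existing schema.
instrument_entry = {
    "name": "checkout_click_events",          # made-up example instrument
    "schema_location": "schemas/checkout.json",
    "maintenance_status": "active",           # active / passive
    "known_quality_issues": ["duplicate events under client retry"],
    "slo": "99.5% delivery within 5 minutes",
    "retention_days": 90,
    "update_frequency": "real-time",          # or daily batch, weekly, ...
    "owner": "Checkout Team",
    "implemented": "2023-04-01",
    "last_modified": "2024-11-15",
    "sunset_date": None,
    "purpose": "primary metric",
    "privacy_classification": "internal",
    "downstream_dependencies": ["dash/checkout-funnel"],
}

def needs_review(entry: dict) -> bool:
    """Flag entries that are unowned, passively maintained, or have
    known quality issues, so the audit can prioritize them."""
    return (
        entry.get("owner") is None
        or entry.get("maintenance_status") != "active"
        or bool(entry.get("known_quality_issues"))
    )
```

A check like `needs_review` is one way the inventory could feed directly into the sunsetting recommendations at the end of this plan.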
- Documentation Review
- Audit measurement plans
- Review event gate streams
- Map code locations
- Identify "shadow instrumentation"
- Review L3SC assessments
- Gap analysis
- Identify metrics without proper definitions
- List unreliable or missing data sources
- Document costly/difficult computations
- Map data retention needs that extend beyond 90 days
- List unfulfilled measurement needs
- Assess potential differential privacy requirements
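If the metric inventory is captured in structured form, several of the gap checks above can be run automatically. The sketch below assumes a list-of-dicts inventory format that is purely illustrative; the field names, statuses, and the 90-day threshold are taken from the checklist, and the sample data is invented.

```python
# Sketch of an automated pass over a metric inventory to surface gaps.
# The inventory format and field names are assumptions for illustration.
def find_gaps(metrics: list[dict]) -> dict[str, list[str]]:
    gaps = {
        "missing_definition": [],   # metrics without proper definitions
        "unreliable_source": [],    # unreliable or missing data sources
        "long_retention": [],       # retention needs beyond 90 days
    }
    for m in metrics:
        if not m.get("definition"):
            gaps["missing_definition"].append(m["name"])
        if m.get("source_status") in (None, "unreliable", "missing"):
            gaps["unreliable_source"].append(m["name"])
        if m.get("retention_days", 0) > 90:
            gaps["long_retention"].append(m["name"])
    return gaps

# Invented sample inventory:
metrics = [
    {"name": "wau", "definition": "7-day active users",
     "source_status": "ok", "retention_days": 30},
    {"name": "ltv", "definition": "",
     "source_status": "unreliable", "retention_days": 365},
]
gaps = find_gaps(metrics)
# "ltv" is flagged in all three gap categories; "wau" in none.
```

Costly computations and differential-privacy needs would still require manual assessment, but checks like these narrow the list reviewers have to examine.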
- Recommendations
- Propose metrics for standardization
- Identify instruments for sunsetting
- Outline data source optimization opportunities
- Define requirements for experimentation lab features
- Document adoption risk mitigation strategies
Timeline
Completion by end of January 2025