Background
Quoting myself (@phuedx):
> We want to start nudging experiment implementors towards declarative analytics instrumentation as much as possible. Part of our approach should be to provide pre-canned, standardized events and analytics instruments for the implementor to use whenever possible.
We actually want to start nudging experiment implementors towards thinking about metrics rather than primitives like streams, events, and queries. Wherever possible, the "pre-canned, standardized events and analytics instruments" that are mentioned above should be higher-level metrics, e.g.
- Pageview
- ReaderRetention
- ClickthroughRate
- SessionLength
- SessionDepth
- ErrorCount
- …
After a little back and forth, we've come up with an initial API and an initial list of events and instruments, along with their specs: https://docs.google.com/document/d/1SY8MK5Rrqwnc_GOtFvrD8ueSVc88nlkCw5mtrLO5jBc/edit.
Proposed Experiment Developer API
```js
const { Pageview, DelayedPageview, SessionLength, ClickthroughRate } = mw.xLab.metric;

const e = mw.xLab.getExperiment( 'my-awesome-experiment' );

// Send an action=page_visit event for the experiment
e.send( Pageview );

// Send an action=delayed_page_visit,action_context=1100 event for the experiment
e.send( DelayedPageview( 1100 ) );

// Start sending action=tick events for the experiment
e.send( SessionLength );

// ---

const ctr = e.start( ClickthroughRate( 'my-awesome-button' ) );
const buttonElement = doRenderButton();

// Send an action=impression,action_source=my-awesome-button event for the experiment
ctr.impression();

// When the button is clicked, send an action=click,action_source=my-awesome-button event for the experiment
buttonElement.addEventListener( 'click', () => ctr.click() );
```
The API is designed to ease experiment implementation by being minimal, consistent, and intuitive, both during development and during review. The API also separates the details of experiment enrollment sampling (i.e. whether the user is enrolled in the experiment) from the event/metric implementation in such a way that performant events/metrics are easy to implement without repetition.
The performance of an event/metric is especially important: it should be difficult for an implementor to introduce performance regressions when the user is enrolled in an experiment.
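To illustrate that separation, here is a minimal sketch of how enrollment gating could work, assuming mw.xLab.getExperiment() resolves enrollment itself and hands back an inert experiment object when the experiment doesn't apply. This is a hypothetical sketch, not the actual mw.xLab implementation; the isEnrolled parameter and the object shapes are assumptions.

```js
// Hypothetical sketch, not the actual mw.xLab code: enrollment is resolved
// once, inside getExperiment(), so metric code never repeats the check.
function getExperiment( name, isEnrolled ) {
	if ( !isEnrolled ) {
		// An inert experiment: every method is a cheap no-op, so
		// instrumentation costs (almost) nothing when it doesn't apply.
		return {
			send() {},
			start() {
				return { impression() {}, click() {} };
			}
		};
	}

	return {
		// Stand-in for the real experiment, which would forward events
		// to the underlying event pipeline.
		send( ...args ) {
			console.log( 'event for', name, args );
		},
		start( instrument ) {
			return instrument( this );
		}
	};
}
```

With a stub like this, the developer-facing calls above (e.send( Pageview ), e.start( ClickthroughRate( 'my-awesome-button' ) )) can be made unconditionally, without per-call enrollment checks.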
Proposed Metric Developer API
At its most basic, a metric is a function that accepts an instance of Experiment and uses Experiment#send() to send events for the experiment, e.g.
Pageview is a simple metric that sends an action=page_visit event for the experiment immediately:
```js
/**
 * @param {Experiment} e
 */
function Pageview( e ) {
	e.send( 'page_visit', { metric_name: 'Pageviews' } );
}
```
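Note that Experiment#send() appears in two forms above: the Experiment Developer API passes it a metric function (e.send( Pageview )), while the metric itself passes an action string plus event data. One way to reconcile the two is a simple overload. The sketch below is an illustration under that assumption, not the actual mw.xLab implementation, and submitEvent() is an invented internal.

```js
// Hypothetical sketch of an overloaded Experiment#send(), for illustration only.
class Experiment {
	constructor( name ) {
		this.name = name;
	}

	send( metricOrAction, eventData ) {
		if ( typeof metricOrAction === 'function' ) {
			// A metric function such as Pageview: hand it the experiment
			// so that it can send its own events.
			metricOrAction( this );
			return;
		}

		// An action string plus event data, as used inside a metric.
		this.submitEvent( metricOrAction, eventData );
	}

	// Invented internal that stands in for the real event pipeline.
	submitEvent( action, eventData ) {
		console.log( this.name, action, eventData );
	}
}
```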
DelayedPageview is a more complicated metric that sends an action=delayed_page_visit event for the experiment after a delay:
```js
/**
 * @param {number} timeout
 * @return {Function}
 */
function DelayedPageview( timeout ) {
	return ( e ) => {
		setTimeout( () => {
			e.send( 'delayed_page_visit', { metric_name: 'DelayedPageviews' } );
		}, timeout );
	};
}
```
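The ClickthroughRate instrument used with Experiment#start() in the Experiment Developer API isn't spelled out above, but following the same pattern it could be a factory that returns a function producing a small handle. This is a hedged sketch based only on the events shown in the earlier example (action=impression and action=click with action_source), not the actual implementation; the metric_name value is an assumption.

```js
// Hypothetical sketch of a metric used via Experiment#start(); the event
// names mirror the comments in the Experiment Developer API example above.
function ClickthroughRate( source ) {
	return ( e ) => ( {
		impression() {
			// action=impression,action_source=<source>
			e.send( 'impression', { action_source: source, metric_name: 'ClickthroughRate' } );
		},
		click() {
			// action=click,action_source=<source>
			e.send( 'click', { action_source: source, metric_name: 'ClickthroughRate' } );
		}
	} );
}
```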