This page documents the data instrumentation for the ABC test events, captured in a new stream/schema based on app_interaction.
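As a sketch of what one event in the new stream might look like, the snippet below shows a minimal payload and a completeness check. All field names (`schema`, `experiment_group`, `recommendation_source`, etc.) are illustrative assumptions for discussion, not the final schema.

```python
# Illustrative sketch of one event in the new app_interaction-based stream.
# Every field name here is an assumption, not the final schema.
EXAMPLE_EVENT = {
    "schema": "app_interaction_recommendation",  # hypothetical stream name
    "action": "recommendation_click",
    "experiment_group": "B",                # A = control, B = search suggestion, C = reading-list dialog
    "recommendation_source": "morelike",    # which API served it: categories | topics | morelike
    "session_id": "abc123",
    "dt": "2024-01-01T00:00:00Z",
}

# Fields the downstream analysis needs on every event.
REQUIRED_FIELDS = {"schema", "action", "experiment_group", "recommendation_source"}

def is_valid(event: dict) -> bool:
    """Check that an event carries every field the analysis needs."""
    return REQUIRED_FIELDS.issubset(event)

print(is_valid(EXAMPLE_EVENT))  # True
```

Keeping group assignment and recommendation source on every event is what lets the validation metrics below be computed per experiment arm.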
How will we know we were successful?
Validation
- 10% of unique users who engage with the experiment do so more than once in a 30-day period
- 5% higher internal referral clicks on recommended articles from users who engage with the experiment, compared to those who do not
- 5% higher clicks on suggestions based on search queries vs. click-throughs to articles from suggested reading lists
- 5% higher clicks to view the suggested reading list vs. users who hit enter on search queries
- 65% or more of feedback scores are positive
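The first validation metric above (share of unique users who engage more than once in the 30-day window) could be computed from the event stream roughly as follows; the event shape is a simplified assumption:

```python
# Sketch of the repeat-engagement validation metric: the share of unique
# users with more than one engagement event in the analysis window.
# Assumes events have already been filtered to a 30-day window and to
# experiment-engagement actions; the event shape is a simplification.
from collections import Counter

def repeat_engagement_rate(events: list) -> float:
    """Fraction of unique users who appear in more than one event."""
    counts = Counter(e["user_id"] for e in events)
    if not counts:
        return 0.0
    repeaters = sum(1 for n in counts.values() if n > 1)
    return repeaters / len(counts)

events = [
    {"user_id": "u1"}, {"user_id": "u1"},  # u1 engaged twice
    {"user_id": "u2"},                     # u2 engaged once
]
print(repeat_engagement_rate(events))  # 0.5
```

The 10% target is met when this rate is at or above 0.10 for the experiment population.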
Guardrails
- 60% of feedback from in-app users is negative
- 10 community members from the target group express negative sentiment
Curiosities
- Do we see a difference in the retention rate for logged-in vs. logged-out users?
- How do the metrics from Rabbit hole compare to Recommended Content?
- Do we see higher pageviews for users who engage with the feature vs. those who do not?
Must Haves
- Run as ABC test
- A group is the control
- B group sees a recommended search query from article view in the search
- C group receives dialog encouraging them to see and save their recommended reading list
- The interface should make clear that recommendations are based on user interests
- Constrained to Sub-Saharan Africa and South Asia
- User able to provide feedback about the quality of recommendations in-app
- Recommendations should pull from the Categories, Topics, or MoreLike APIs; instrumentation must record which of these APIs each user selection came from
- After 20 days, the experiment should be removed from the app
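The A/B/C split above might be implemented with deterministic hashing so a user always lands in the same bucket across sessions. A minimal sketch, assuming an equal three-way split and a hypothetical experiment salt:

```python
# Minimal sketch of deterministic A/B/C bucketing: hash the user id with
# an experiment-specific salt so assignment is stable across sessions.
# The salt string and the equal 1/3 split are assumptions, not the spec.
import hashlib

GROUPS = ["A", "B", "C"]  # A = control, B = search suggestion, C = reading-list dialog
SALT = "reading-list-abc-test"  # hypothetical experiment identifier

def assign_group(user_id: str) -> str:
    """Map a user id to one of the three experiment groups."""
    digest = hashlib.sha256(f"{SALT}:{user_id}".encode()).hexdigest()
    return GROUPS[int(digest, 16) % len(GROUPS)]

# A given user always gets the same bucket:
print(assign_group("user-42") == assign_group("user-42"))  # True
```

Salting with an experiment identifier keeps this bucketing independent of any other experiment's assignment for the same user.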
Nice to Haves
- Display summaries of articles in the search and reading-list interfaces
Target Quant Regions and Languages
South Asia & Sub-Saharan Africa
User Testing Languages
- English
- Hindi
- French
- Arabic
User Testing Considerations
- Impact for screen readers
- Impact for right-to-left (RTL) readers
- Preferences based on Age