Let's look at the data we've collected so far for AaLD. I'm open to any results or insights, but here are some specific questions to see whether we can answer them yet:
- Were there any meaningful differences in trust scores between the experimental and control groups? Between users who saw the preview card and those who saw the full timeline?
- Were there any meaningful differences in trust scores between the "baseline" users from the earlier fall survey and the experiment groups?
- Which parts of the feature were clicked on or interacted with the most?
- How far down did people scroll in the timeline?
- How many thanks were generated?
- Any indication in the data of bugs or major technical issues (e.g. missing data, an event firing far less often than expected)?
- Did the type of article (evergreen vs controversial) impact user interaction or survey results?
- Was there any change in the scores over time (e.g. reported trust declined over the course of the experiment)?
- Which articles generated the most views? Interactions?
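For the group-comparison questions, one simple starting point would be a Welch's t-test on the trust scores. Below is a minimal sketch; the scores and group sizes are made-up placeholders, not real AaLD data, and the actual column names and scale in our dataset may differ.

```python
# Sketch: compare mean trust scores between two groups with a Welch's
# t statistic (does not assume equal variances). Placeholder data only.
from statistics import mean, variance
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = variance(a), variance(b)
    na, nb = len(a), len(b)
    return (mean(a) - mean(b)) / sqrt(va / na + vb / nb)

control = [3.1, 3.4, 2.9, 3.6, 3.2]      # placeholder Likert-style scores
experiment = [3.8, 3.5, 3.9, 3.4, 3.7]

t = welch_t(experiment, control)
print(round(t, 2))  # → 2.76
```

With real sample sizes we'd also want a p-value (e.g. `scipy.stats.ttest_ind(..., equal_var=False)` computes both the statistic and the p-value), but the point here is just that "meaningful difference" should mean more than a raw difference in means.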