Description

We have a way of measuring how long users stay on pages they searched for, and we have QuickSurveys for getting user feedback. After we deploy a survey in T118800 and obtain some data, we need to figure out if & how their length of stay correlates with the self-reported satisfaction with the search results.
| Status | Assigned | Task |
|---|---|---|
| Declined | mpopov | T113240 Analyze qualitative user satisfaction data for search on-wiki |
| Declined | JGirault | T118800 QuickSurveys: Add survey on article page when coming from a wiki search |
| Resolved | MSyed | T118311 Write survey question |
| Resolved | JGirault | T117831 Figure out if we can show users who have engaged with Wikipedia search a survey |
| Resolved | MSyed | T118811 Coordinate with Legal, CA and community to roll out quick survey |
| Resolved | JGirault | T119153 QuickSurveys: Add sessionId to Schema:QuickSurveysResponses |
| Resolved | debt | T119149 QuickSurveys: Allow internal surveys to provide a custom privacy policy |
| Resolved | Ottomata | T119144 EventLogging sees too few distinct client IPs {oryx} [8 pts] |
| Resolved | bmansurov | T119152 Spike: [2 hours] QuickSurvey schema did not create a table in log database from first survey |
| Declined | None | T127119 Sign-off required: discuss "quicksurvey" query parameter, cache busting implication, how to load a specific survey on a wiki page |
Event Timeline
P.S. We also need to identify alternative metrics we can use temporarily in case this one takes too long to validate.
Oliver and I briefly discussed the possibility of using a metric modeled on the median lethal dose ("LD50"): the dwell time at which we've lost half of our population. If that number goes up, users are staying on the pages longer. This is NOT necessarily indicative of the quality of our results, since we may make the system yield worse results and LD50 would go up because suddenly users are spending more time on pages looking for the content they want, figuring out why the heck we gave them that page. So if we DO end up going forward with this or something similar, we would need to build in safeguards against such situations.
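The LD50-style dwell metric above is just the time point by which half the observed sessions have ended, i.e. the median dwell time. A minimal sketch (the session data is made up for illustration):

```python
# Hypothetical sketch of the "LD50" dwell metric: the dwell time by which
# half of the observed population has left the page. With complete
# per-session dwell times this is simply their median.
from statistics import median


def ld50(dwell_times_seconds):
    """Return the dwell time (seconds) by which half the sessions ended."""
    return median(dwell_times_seconds)


# Illustrative per-session dwell times in seconds, not real data.
sessions = [12, 45, 30, 7, 90, 60, 25, 40]
print(ld50(sessions))
```

Note the caveat from the comment above: a rising LD50 can mean better results (users engage longer) or worse ones (users struggle longer), so it would need guardrail metrics alongside it.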
Results of the meeting: we have agreed to not hack together a solution that will get us in trouble with the community. We are waiting until we have a surveying solution in place. One possible solution is Readership's survey system.
Moving out of the sprint since it's dependent on unscheduled, unspecified work from other teams.