
GOAL: Run our first survey campaign
Closed, Resolved · Public

Description

Copied across from https://phabricator.wikimedia.org/T107592#1598337
Let's edit the description until we have clear concrete survey to run.

First Survey:
The impetus behind this project was that in Q1 planning, when we did the walk-throughs (with JOH playing an excellent old man), it was obvious that we all had different assumptions about the relative weights of our primary use cases. As someone pointed out on an unrelated thread this week, knowledge of the present can reinforce existing use cases and miss opportunities, but I believe it is a necessary first step towards identifying the usage patterns we need to protect and even improve while we explore any new opportunities:

So. I want to end up with a % by platform (and ideally project and even country) of what our users are coming to WP for:

  • learn what something is (summary/definition, e.g. what is strep throat)
  • lookup a specific fact (e.g. "what treatments are used for strep throat")
  • learn about a subject (e.g. "I need to browse around early American union activity")
  • deep analysis of a specific topic (e.g. "I want to know everything about the rock dove")
  • just killing time
  • other _____

I want to end up with a dataset from which I can derive:

Desktop, En, US, 60% learn what something is
Mobile, En, US, 10% lookup a specific fact
etc.
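Deriving that breakdown from raw responses is a simple grouped count. A minimal sketch, with entirely hypothetical field names and sample data:

```python
from collections import Counter

# Hypothetical raw survey responses: (platform, language, country, answer)
responses = [
    ("Desktop", "En", "US", "learn what something is"),
    ("Desktop", "En", "US", "learn what something is"),
    ("Desktop", "En", "US", "lookup a specific fact"),
    ("Mobile",  "En", "US", "just killing time"),
]

# Total responses per (platform, language, country) segment,
# and counts per answer within each segment.
segments = Counter((p, l, c) for p, l, c, _ in responses)
answers = Counter(responses)

# Express each answer as a percentage of its segment's total.
for (p, l, c, a), n in sorted(answers.items()):
    pct = 100 * n / segments[(p, l, c)]
    print(f"{p}, {l}, {c}, {pct:.0f}% {a}")
```

The real pipeline would of course read from the survey backend rather than an in-memory list, but the aggregation step is the same shape.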

Thoughts on whether or not that goal is stupid or how it can be improved?

How we get that is up to Joaquin and Anne, with support from Design Research (to the extent that it is helpful). It might be that we ask each user about each option, but it also might be that we just ask one question at a time in buckets and compare the "yes"/"no" ratios.

My only concern with the current plan is that the in-article embedding means that certain kinds of articles will surface the survey earlier or later (depending on lead paragraph length). We need to think seriously about how to correct that bias. A top-of-article placement (below the banner) or an overlay are the only options I can think of.

Event Timeline

So it seems like this is the kind of survey you want to run:

Please help make Wikipedia better by letting us know your motivations so we can better serve your needs. What is the purpose for your visit to Wikipedia today?

  • This would have multiple choice answers and suggests we need to change our approach so that we can support answers other than yes/no/maybe.
  • Your question also suggests that we might want to give examples to avoid having large, unclear buttons (for example, we might want to show a question mark that, when clicked, says e.g. "what is strep throat?").
  • You mention 'other' might be an answer - are you expecting a user to enter their own text for that? (this also hasn't been thought about)

Hey @JKatzWMF, sounds like a reasonable starting goal. Your goal sounds like "To learn whether users' motivations differ based on platform, country, and project when accessing Wikimedia Projects". Is this correct?

Other goal ideas:

  • To learn the satisfaction level with the reading experience (but not content) across platforms, which can be used as a baseline for future software changes
  • To learn how users accessed a Wikipedia/Wikimedia article (e.g. google, yahoo, etc.)

Other considerations:

  • Will this just be for English Wikipedia?
  • Having yes/no questions might be helpful later on. First you need to determine the right response option list using focus groups and survey testing.
  • We have a ton of readers; you can use A/B testing to figure out what question structures bring in the most responses (e.g. Should we include "Please help Wikipedia by answering this question" or should we just ask the question?), and you can use testing and focus groups to help you determine the right response options.
  • The examples (e.g. what is strep throat) should be provided to all respondents. The goal of providing examples is to help clarify definitions and everyone should be exposed to this equally or answers will be skewed. Typically best to avoid definitions or "e.g." altogether.
  • If you want to run the same survey again in the future, be sure to gather a large enough sample to have statistical power to test changes from one survey to the next.
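The statistical-power point above can be made concrete with the standard two-proportion sample-size formula. A sketch using only the standard library (the 60% → 55% example is made up):

```python
from math import ceil
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.8):
    """Respondents needed per survey wave to detect a change from
    proportion p1 to p2 (two-sided two-proportion z test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, e.g. ~1.96
    z_beta = NormalDist().inv_cdf(power)           # power quantile, e.g. ~0.84
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# e.g. detecting a shift from 60% to 55% choosing one answer between waves
print(sample_size_two_proportions(0.60, 0.55))
```

Note how quickly the required sample grows as the effect you want to detect shrinks; that is the argument for deciding the minimum detectable change before launching the first wave.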

Let me know if you need help/guidance with question review. My role is to support survey design for other teams so I'm happy to help.

Jdlrobson set Security to None.

@Jdlrobson
Yes, multiple choice is probably the best way to do this. Another, fuzzier, way would be to ask each question separately to 1/6 of the population. So 1/6 gets "are you here to learn a definition", and another 1/6 gets "are you here to learn a specific fact"... Based on the relative numbers of yes/no answers, we could potentially glean relative proportions. My guess, however, is that this would not be valid, as showing the various answers at one time adds much-needed context.
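For what it's worth, the reduction step of that bucketed approach would look roughly like this; the yes-rates below are made up, and (per the caveat above) treating them as relative weights ignores users who would answer "yes" to several questions:

```python
# Hypothetical yes-rates from six 1/6 buckets, one question per bucket.
yes_rates = {
    "learn what something is": 0.30,
    "lookup a specific fact": 0.20,
    "learn about a subject": 0.15,
    "deep analysis": 0.10,
    "just killing time": 0.20,
    "other": 0.05,
}

# Normalize the yes-rates so they sum to 100%, treating each rate as a
# relative weight for its use case.
total = sum(yes_rates.values())
shares = {q: 100 * r / total for q, r in yes_rates.items()}
for q, s in shares.items():
    print(f"{q}: {s:.0f}%")
```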

for "other" we could leave it as "other" without free form text for now. If we get a large % of "other" we can go about exploring how to get details.

@egalvezwmf thanks for offering to help! To answer your specific questions, eventually this would include other projects, but it is not a blocker.

We should talk about the best approach. Asking questions such as "Are you here to look something up quickly?" Yes/No can easily be done with the current architecture. Multiple choice would probably require more work, so I would prefer we run campaigns in a manner more similar to the former! :)

@Jdlrobson just because something is easier to do does not mean the data will be accurate or useful. I would consult with researchers about making sure the results you get with yes/no questions are useful. On the flip side, the question wording itself needs to be well designed. As an example, "quickly" could mean any period of time depending on the user.

Is anyone working on documentation for best practices in the design process for one-question surveys? It would be good to start building documentation so that people interested in running this kind of survey can see the scope of work involved.

@Jdlrobson - as we get closer to actually putting this out, we should sit down with @egalvezwmf so he can document the process for other teams.

Should we block on dismissing the survey as in T113644? cc/@Moushira

@atgo, given that the user can scroll past (it's not an overlay) and the very low traffic, I don't see this as a blocker.

Jdlrobson claimed this task.

A first survey was run, so I'm calling this done. Please open a new task if points from this one need to be pulled into a future survey.