
Microsurveys: Don't keep asking me too often, but do ask me again, eventually, on some surveys
Open, MediumPublic

Description

So here's a situation that may require some thought (Kudos to @Trizek-WMF for pointing it out):

  • I want to answer your question.
  • Once I've answered it, I really don't want to get bothered by you again. I'll be angry if you ask me the same question every day (and your response rate will decline).
  • But my view might change over time.
  • So, for some questions (and not others), it might be worth re-asking me, after a delay that is long enough that it won't irritate me.

Event Timeline

Trizek removed a subscriber: Trizek.

Some scenarios with realistic show rates (how often one would, without any muting, encounter a survey) could be helpful in planning this, e.g. one for someone in a group that might be asked relatively often, one for a less frequently asked group, and another for readers.

  • For most surveys, wouldn't you expect the average response from a large enough sample to reflect any general changes in opinion on a topic? If ~20% of people change their mind about a topic after a given survey, that should be obvious in later iterations of the survey, whether or not you re-survey any particular person. The size of the survey and the exact proportion of opinions held determine the confidence interval, and thus how small a shift you'd be able to detect; with large surveys that should be a few percentage points at most, so any big shift should be clear whether you re-survey any particular individual or not. I'd be curious to see an example of a specific question that really needs to be re-asked of the same person; none come to mind, but that doesn't mean there aren't any!
  • It seems like this mechanism also needs to know how to interact with the general constraint of not surveying too often. Say—using arbitrary numbers for the example—that survey A can be re-asked in 3 weeks, but in general we decide we won't expose someone to any other survey for two weeks after they answer a given survey. So, a user takes survey A in week 0, and so theoretically can be re-asked in week 3. However, they take survey B in week 2, and so shouldn't be shown any survey until week 4. Survey A has to know about its own timeline (week 3 is okay to repeat A) and the general survey timeline (but no survey should be shown until week 4). I think this is covered by checking a "next available date" for the specific survey and a general "next available date" for all surveys.
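
The two "next available date" checks described above could be sketched roughly as follows. This is only an illustration of the logic, not QuickSurveys code; the 3-week and 2-week cooldowns are the arbitrary numbers from the comment, and all names are hypothetical.

```python
from datetime import date, timedelta

# Hypothetical cooldowns, taken from the arbitrary example numbers above.
PER_SURVEY_COOLDOWN = {"A": timedelta(weeks=3), "B": timedelta(weeks=3)}
GLOBAL_COOLDOWN = timedelta(weeks=2)

def can_show(survey_id, today, last_answered):
    """last_answered maps survey id -> date of the user's last response."""
    # Global check: no survey at all until GLOBAL_COOLDOWN has passed
    # since the most recent response to *any* survey.
    if last_answered:
        most_recent = max(last_answered.values())
        if today < most_recent + GLOBAL_COOLDOWN:
            return False
    # Per-survey check: this survey's own re-ask delay.
    answered = last_answered.get(survey_id)
    if answered and today < answered + PER_SURVEY_COOLDOWN[survey_id]:
        return False
    return True

# The week-0 / week-2 / week-3 / week-4 scenario from the comment:
history = {"A": date(2024, 1, 1), "B": date(2024, 1, 15)}  # weeks 0 and 2
print(can_show("A", date(2024, 1, 22), history))  # week 3: blocked by the global cooldown
print(can_show("A", date(2024, 1, 29), history))  # week 4: allowed
```

Checking the global date first means survey A never needs to know anything about survey B, only about the shared "next available date".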

I agree with this — it's difficult to know 'how soon is now' when doing surveys.

Maybe this is something we can ask the community for feedback on — before making any decisions — and average out the responses?

I think that this option would be useful for long-term general questions, such as "Would you recommend this wiki as a great place to contribute?" I might think it's a great place this week when I post my new article, and a poor place next week, after it's been deleted.

In particular, if you're looking at smaller communities, if you ask a question like this for only 1% of page views, then by the end of the year, you will have already asked this question of every regular contributor. Even at mid-size and larger wikis, you could easily run out of certain sub-groups (people with ≥100K edits, admins, people whose accounts are more than ten years old, etc.) if you don't allow re-asking a question.
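
The "you will run out of regular contributors" arithmetic above can be made concrete. The numbers below are purely illustrative assumptions (a 1% sampling rate and a guessed page-view count for a regular contributor), not real wiki figures:

```python
# Rough sketch of the sampling arithmetic: at a 1% show rate, how likely
# is it that a regular contributor goes a whole year without being asked?
# Both numbers are illustrative assumptions.
sample_rate = 0.01             # survey shown on 1% of page views
views_per_year = 30 * 365      # assumed page views by a regular contributor

# Probability a given regular contributor is *never* sampled in a year:
p_never = (1 - sample_rate) ** views_per_year
print(p_never)  # vanishingly small: effectively everyone has been asked
```

Under these assumptions the chance of never being sampled is effectively zero, which is why re-asking (or stopping) has to be an explicit decision rather than something the sampling rate takes care of.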

> Maybe this is something we can ask the community for feedback on — before making any decisions — and average out the responses?

We ourselves seem to have a hard time assessing the possible consequences ("What does 1% of pageviews mean?"), so I don't think it would be easy for the community either. So I would not suggest this.

You will have already asked this question of everyone. …

Good point… so, for this issue I see two ways to go:

  1. We set a cookie on the user's side that saves the ID of the question and the (rough) date. Then we can just say: show the survey to n% of page views, or until 100 people have responded, or whatever, AND only if any existing cookie is older than N weeks (this would also let us measure how many repeated responses we get). I think this is the same approach used for the donation banners.
  2. We don't set a cookie. Then we just need to set the rate reasonably low. BUT since we can't know whether a user was already in the sample, we will inevitably show the survey to some users "too often" and to others "too rarely". We can, however, calculate how often that would happen on average (it is a bit of statistics math, but not hard to do, and could be a little widget integrated into the UI).
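
The "bit of statistics math" for option 2 might look like the following sketch. Without a marker we can't exclude prior respondents, so a user with v page views during a campaign sampled at rate r is shown the survey roughly Poisson(r·v) times; the rates and view counts below are illustrative assumptions:

```python
import math

def p_repeat(rate, views):
    """Probability a user is sampled at least twice (Poisson approximation)."""
    lam = rate * views
    # P(X >= 2) = 1 - P(X = 0) - P(X = 1) for X ~ Poisson(lam)
    return 1 - math.exp(-lam) * (1 + lam)

# Illustrative figures, not real wiki numbers:
print(p_repeat(0.001, 200))    # light reader: repeat exposure is unlikely
print(p_repeat(0.001, 5000))   # heavy editor: repeats are almost certain
```

This is the kind of calculation that could back the "little widget" idea: given a proposed rate, show organizers what fraction of each audience segment would see the survey more than once.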

All in all, I find 1) better, since that solution is already tested on the banners, and I don't see a way to reliably prevent "oh no, not again" without some sort of marker like a cookie.
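
A minimal sketch of option 1's marker, assuming a JSON-encoded cookie (or localStorage value) mapping survey IDs to the time they were last shown. The names and the N-week threshold are hypothetical, not QuickSurveys internals:

```python
import json
import time

REASK_AFTER_WEEKS = 4          # assumed "N weeks" re-ask delay
WEEK = 7 * 24 * 3600

def eligible(cookie_value, survey_id, now=None):
    """cookie_value: JSON string mapping survey id -> unix timestamp shown."""
    now = now if now is not None else time.time()
    seen = json.loads(cookie_value) if cookie_value else {}
    shown_at = seen.get(survey_id)
    # Eligible if never shown, or shown longer than N weeks ago.
    return shown_at is None or now - shown_at > REASK_AFTER_WEEKS * WEEK

def mark_shown(cookie_value, survey_id, now=None):
    """Return the updated cookie value after showing the survey."""
    seen = json.loads(cookie_value) if cookie_value else {}
    seen[survey_id] = now if now is not None else time.time()
    return json.dumps(seen)

c = mark_shown("", "edit-satisfaction")
print(eligible(c, "edit-satisfaction"))                          # just shown: False
print(eligible(c, "edit-satisfaction", now=time.time() + 5 * WEEK))  # 5 weeks later: True
```

Storing the timestamp per survey ID also gives the per-survey "next available date" for free, while the global cooldown could be derived from the newest timestamp in the same marker.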

Option #1 (setting a cookie or similar) would also be useful when you don't want repeats.

matmarex added a subscriber: matmarex.

Removing MediaWiki-Page-editing so that this doesn't clutter searches related to actually editing pages in MediaWiki. This project should only be on the parent task, probably (T89970). [batch edit]

Jdlrobson moved this task from Bugs to Feature requests on the QuickSurveys board.