
An enabled, configured quick survey shows on the mobile version of the site
Closed, Resolved · Public · 5 Estimated Story Points


If a quick survey is enabled, configured, and targeted at mobile, and bucketed to n% (n > 0) of users, those chosen users will see a widget with the survey question and its possible answers on the mobile site.


  • Survey is visible to a bucketed user with the configured values
  • When the user answers the survey, it won't show again for them.
  • Title, body of text and possible answer buttons come from configuration.
  • Clicking on an answer
    • will show a thanks dialog; the survey won't be shown again for the same user.
    • will send an event logging event with the information required by the schema at T107747

Baseline mockups:


Screen Shot 2015-07-31 at 5.50.02 PM.png (673×589 px, 84 KB)

Screen Shot 2015-07-31 at 5.51.02 PM.png (673×588 px, 92 KB)

Screen Shot 2015-07-31 at 5.51.02 PM thanks.png (610×588 px, 71 KB)


survey-01.png (732×590 px, 68 KB)


Event Timeline


@Jhernandez, the basic layout looks fine. Same thing here - messaging should be backed by translatewiki.

It should be possible to conduct multiple types of surveys in parallel.

I recommend use of a centralized schema that all surveys will use, so as to avoid overhead in creating new schemas. The following fields should be required:

  • String: survey code (e.g., a survey may be classified as "fontincrease20150731-day" supposing this is a one day survey)
  • String: survey response value (e.g., in this example "yes", "no", "unsure" would be specified by the client; values should be expressed unlocalized)
  • String (probably not enum, because there could be more types): channel (clients would specify "desktop", "mobile web", "app" for now)
  • String (probably not enum, because there could be more types): mode (clients would specify "alpha", "beta", "stable", "prototype" for now). We may determine that in some cases we want to unilaterally run a survey with a given probability from the JavaScript in a particular mode, instead of relying upon central server config in mediawiki-config (CommonSettings / InitializeSettings / etc.)
  • Boolean: whether the user was logged in.
  • Enum: editCountBucket ( "0 edits", "1-4 edits", "5-99 edits", "100-999 edits", "1000+ edits")
  • String: country code, if known (n.b., this is available from the first field of the GeoIP cookie's colon-separated list). "Unknown" if unknown.

Do not capture the username. Other information for aggregation purposes can be obtained from the event capsule itself (in particular, the wiki and userAgent fields can be parsed).
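To make the field list above concrete, here is a hypothetical payload for one such event. All property names are illustrative guesses; the actual field names are defined by the schema at T107747:

```javascript
// Hypothetical event payload for the proposed centralized survey schema.
// Property names are illustrative only; see T107747 for the real schema.
var surveyEvent = {
	surveyCode: 'fontincrease20150731-day',  // identifies the survey instance
	surveyResponseValue: 'no',               // unlocalized answer value
	channel: 'mobile web',                   // "desktop", "mobile web", "app", ...
	mode: 'stable',                          // "alpha", "beta", "stable", "prototype", ...
	isLoggedIn: false,
	editCountBucket: '5-99 edits',
	countryCode: 'Unknown'
};
```

A validation layer (EventLogging's schema validation, if that is the chosen transport) would reject events missing any of the required fields.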

Is this stuff going to be configurable so it can be targeted on a per-wiki basis?

@dr0ptp4kt I assume you are suggesting EventLogging as the mechanism to store results?
If so I'm fine with the proposed storage model.

With respect to "It should be possible to conduct multiple types of surveys in parallel":
I think to start with it would make sense to support one survey at a time and prove the hypothesis that we can get value from them. Running multiple surveys complicates the solution prematurely imo.


Yes, let's plan on Event Logging. We got the okay on this approach for microsurveys.

Regarding running surveys in parallel, I agree in practice we should start by conducting them one at a time. I think the key thing is to avoid architecture that would make it too hard for us to actually run them in parallel as we learn the strengths and weaknesses of doing this stuff.

@Jdlrobson @dr0ptp4kt Since it'll be configuration based, we'll be able to specify specific config options in InitializeSettings.php on a per-deployed-wiki basis if necessary, but we're not optimizing for that use case.

@Jhernandez Mocks look fine, I edited a couple things and added to the task description.
We shouldn't use 2 different primary buttons on the same screen, so with this survey we should make them all neutral buttons. In T107589, though, it's fine to have 1 constructive and 1 neutral.
I added a bit more padding between the buttons and text below.
Also made the text below slightly larger.


Great stuff. We should remain mindful about principles of good survey design, e.g. considering the possible impact of the answer button colors on the survey results (the coloring of options has been shown to influence answers in certain situations). I guess this is another argument in favor of @KHammerstein's point about using neutral buttons. In the same vein, a randomization option might be useful.

(And BTW the wording of the question in the second mockup is quite leading, but it's just an example of course.)

Also, how about a dismissal button for those who decide they don't want to participate after seeing the question? (also recorded in the schema; will ask in T107747 as well)
Will we track response rates?

As I commented in T107747:

AFAICT this schema is meant to track a user's response to a survey. If we wanted to track a user's engagement with a survey – which we do, right? – then that should be tracked with a different schema, e.g. Schema:QuickSurveysEngagement.

@Tbayer the answers (buttons with their types: constructive, destructive, neutral) will be configurable by the person designing the survey. Initially we're not going to impose any restrictions; the survey designer will need to be mindful and careful about which predefined answers, and which types, they define.

See T107586 for the configuration options that the survey designer will have available, especially regarding the answers to the question:

  • possible answers as a list of text options (i18n json key, + json qqq & en values) with a qualifier (positive, neutral, negative)
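A sketch of what one such configuration entry might look like, expressed as a JavaScript object. Every key, value, and message name below is hypothetical; the real option names are the ones defined in T107586:

```javascript
// Hypothetical survey configuration entry, loosely following the options
// discussed in T107586. Keys and i18n message names are illustrative only.
var surveyConfig = {
	name: 'example-reading-survey',
	enabled: true,
	coverage: 0.5,               // fraction of users bucketed into the survey
	platforms: [ 'mobile' ],     // where the survey is targeted
	question: 'ext-quicksurveys-example-question',       // i18n message key
	description: 'ext-quicksurveys-example-description', // i18n message key
	answers: [
		// Each answer is an i18n key plus a qualifier (positive/neutral/negative),
		// which the frontend could map to a button style.
		{ key: 'ext-quicksurveys-example-answer-yes', qualifier: 'positive' },
		{ key: 'ext-quicksurveys-example-answer-unsure', qualifier: 'neutral' },
		{ key: 'ext-quicksurveys-example-answer-no', qualifier: 'negative' }
	]
};
```

Keeping answers as message keys (with qqq and en values in the i18n JSON) keeps the survey text in translatewiki's pipeline, as requested earlier in the thread.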

@phuedx I think it's time we pushed hard on T96155
Also we may want to upstream your experimentation code.
Take a look at my first stab (in particular the js code)

I really don't want to make QuickSurvey depend on MobileFrontend. Seems a good opportunity to get some upstreaming done. Whaddayathink?

Feel free to grab this card during European hours if you fancy working on it in any form... just be sure to assign it to yourself.

I really don't want to make QuickSurvey depend on MobileFrontend. Seems a good opportunity to get some upstreaming done. Whaddayathink?


The experiments API will have to be modified slightly to accept an arbitrary configuration and an arbitrary ID, as it currently calls mw.mobileFrontend.user.getSessionId – which has side effects – transparently to the caller, i.e.

var experiments = M.require( 'experiments' );

experiments.getBucket( 'foo' );

would become:

var experiments = mw.experiments,
  experimentsConfig = getExperimentsConfig(),
  uniqueId = getUniqueId();

experiments.getBucket( experimentsConfig, 'foo', uniqueId );

We could either upstream the experiments code or create a library that we share between the two extensions /cc @Jhernandez
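A minimal sketch of what the stateless getBucket form could look like. The hashing scheme, config shape, and fallback behaviour here are assumptions for illustration, not MobileFrontend's actual implementation:

```javascript
// Sketch of a stateless bucketing function: the caller supplies the
// experiment config and a unique token, so there is no hidden dependency
// on mw.mobileFrontend.user.getSessionId. The hash is illustrative only.
function getBucket( experiment, token ) {
	var names = Object.keys( experiment.buckets ),
		totalWeight = 0,
		hash = 0,
		acc = 0,
		i;

	// Cheap deterministic hash of the token into [0, 1).
	for ( i = 0; i < token.length; i++ ) {
		hash = ( hash * 31 + token.charCodeAt( i ) ) % 1000003;
	}
	hash = hash / 1000003;

	// Total weight of all buckets, e.g. { control: 0.5, A: 0.25, B: 0.25 }.
	names.forEach( function ( name ) {
		totalWeight += experiment.buckets[ name ];
	} );

	// Walk the cumulative distribution until the hash lands in a bucket.
	for ( i = 0; i < names.length; i++ ) {
		acc += experiment.buckets[ names[ i ] ] / totalWeight;
		if ( hash < acc ) {
			return names[ i ];
		}
	}
	return names[ names.length - 1 ];
}
```

Because the token is an explicit argument, the same user (same token) always lands in the same bucket, which is what makes "won't be shown again" and n% sampling composable.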

Change 230099 had a related patch set uploaded (by Phuedx):
Tweak the layout of the experiments code

230099 should make extracting the experiments code easier.

If anyone wants to help out with the tasks I've listed in the TODO, that would be much appreciated. Feel free to split out a sub task and assign it to yourself. I'm trying to use this as an opportunity to find ways mobile code and oojs ui can become more similar to each other.

Needs eyes:
@bmansurov remaining to dos:

  • Pass config from backend to frontend
  • Add oojs ui buttons to the PanelLayout
  • Find a better way to register modules other than polluting mw [T108655]
  • Apply PanelLayout theme changes using Panel.js in MobileFrontend as basis in oojs ui
  • Irresponsible to load both hogan and mustache on mobile?
  • Irresponsible to load oojsui when View code exists?
  • Experiments needs upstreaming [Id23edeffb3cd025bf0db7f80e4133e5334e704f7]
  • Style intermediate panel better.
  • "Pass config from backend to frontend" is done at T107586
  • "Irresponsible to load both hogan and mustache on mobile?" - Yes it does sound irresponsible to do so.
  • "Irresponsible to load oojsui when View code exists?" - Maybe in the beginning, but in the long run it's a non-issue when MF moves away from View.

So I've been trying to build this in oojs ui, which is why it might seem to be taking longer. I've almost got a visually working demo – just need to sort out i18n and click handling before we might consider merging.

I'm trying to find some common ground between the two libraries so we can use OOJS UI.
Essentially a template is used to auto-generate the $content option in QuickSurvey.prototype.widget
Would appreciate some eyes on the code to pull out the good ideas and scrap the bad ones.

@bmansurov in terms of the loading hogan or mustache I'm not sure how to solve that problem yet :-/ We could try hijacking the mustache compiler but that might get messy...

Change 230099 merged by jenkins-bot:
Tweak the layout of the experiments code

I've upstreamed the mobile.experiments module in 231288. It's in a good place to review now. I've noted caveats in the commit message but I'd love some input before going any further.

I'd be keen to merge in current form and iterate off this if at all possible.

I'd be somewhat happier merging 230001 if we were working in a dev branch.

@Jhernandez: You cool with trying out the dev branch workflow on QuickSurveys?

3:45:41 PM <joakino> phuedx: fine with me
3:45:48 PM <phuedx> cool
3:45:53 PM <joakino> there's nothing to deploy yet so 👍

This now works with the dev branch (see

Can we leave this in sign off and add sub tasks of the remaining work that needs to be done to avoid confusion?

@Jdlrobson 👍, ping me when it's really ready for signoff.

@phuedx, @Jdlrobson, @aripstra Adding more use case context, per retro conversation:

First Survey:
The impetus behind this project was that in Q1 planning, when we did the walk-throughs (with JOH playing an excellent old man), it was obvious that we all had different assumptions about the relative weight of our primary use cases. As someone pointed out on an unrelated thread this week, knowledge of the present can reinforce existing use cases and miss opportunities, but I believe it is a necessary first step towards identifying the usage patterns we need to protect and even improve while we are exploring any new opportunities:

So. I want to end up with a % by platform (and ideally project and even country) of what users are coming to WP for.

  • learn what something is (summary/definition, e.g. what is strep throat)
  • lookup a specific fact (e.g. "what treatments are used for strep throat")
  • learn about a subject (e.g. "I need to browse around early american union activity")
  • deep analysis of a specific topic (e.g. "I want to know everything about the rock dove")
  • just killing time
  • other _____

I want to end up with a dataset from which I can derive:

Desktop, En, US, 60% learn what something is
Mobile, En, US, 10% lookup a specific fact

Thoughts on whether or not that goal is stupid or how it can be improved?

How we get that is up to Joaquin and Anne, with support from Design Research (to the extent that it is helpful). It might be that we ask each user about each option, but it also might be that we just ask one at a time in buckets and compare the "yes"/"no" ratio.

My only concern with the current plan is that the in-article embedding means that certain kinds of articles will surface the survey sooner or later (depending on lead paragraph length). We need to think seriously about how to correct that bias. A top-of-article (below banner) or overlay are the only options I can think of.

In dev this is happening. It does need some tweaks, but I am satisfied enough to call this done. I think I'll break out some smaller tasks and attach them to T104439 to ensure these are tracked.

@JonKatzWMF let's talk about the first survey in T111445