
Technical Collaboration's expectations on surveys for FY2016-17
Closed, Resolved · Public

Description

@egalvezwmf is asking for our survey support needs from Learning & Evaluation during FY2016-17. He needs a rough estimate as soon as possible. If you have clear candidates, please list them below.

  • Who are the target audiences for your primary work?
  • What measures do you need to help make decisions? Think about your KPIs, and also about what you want to prevent or avoid, or what problem you are trying to solve.
  • What decision(s) will this data inform?
  • What large one-time surveys are you planning next year?

Surveys tentatively planned

Event Timeline

Qgil raised the priority of this task to Needs Triage.
Qgil updated the task description.
Qgil added subscribers: Qgil, egalvezwmf.
Qgil triaged this task as High priority. Feb 10 2016, 1:02 PM
Qgil added a subscriber: Rfarrand.

Currently I'm aware of the participant satisfaction surveys @Rfarrand runs after the Wikimedia Developer Summit and the Wikimedia Hackathon (the Wikimania Hackathon survey is part of the general Wikimania one). However, until now she hasn't needed support from other teams.

In T124041: Technical Collaboration narratives and budget for core work, we are suggesting an annual survey as one of the KPIs, and this one would surely welcome support from our experts.

And we have also mentioned the possibility of running a Community health survey for Wikimedia technical spaces, which perhaps could also be run on an annual basis: T116370: Community health survey for Wikimedia technical spaces

There might be some overlap between these two surveys, and both might also overlap with Support & Safety's Harassment survey if they keep running it.

The Product teams that Community-Relations-Support works with might run more surveys. @egalvezwmf is asking us whether we are aware of any plans there.

Qgil edited projects, added Surveys; removed Community-Wishlist-Survey-2015.
Qgil set Security to None.

Currently working on T125632: Plan, write and submit a satisfaction survey concerning Flow to communities, but you and Edward already know about that.
I also have the community side of T113490: 10 tips for communicating with communities when developing software

Johan and I plan to have a consultation with the translators community, but it would be a small survey, in English only.

Thanks @Qgil

Hello, is L&E planning to run the post-Wikimania survey this year? As with last year, it would be good to partner to make sure the right questions are asked of the hackathon participants.

I am also hoping/planning to finally move from Google Forms to Qualtrics for the hackathon in Jerusalem. I think I can put together most of the survey myself, but I may have a few questions.

Thanks @Qgil for starting this.

Some tips for everyone: focus on who your primary audiences are, then work out what data you need, especially on an ongoing basis (e.g. satisfaction, awareness, demographics, behaviors). It's important that we focus on our priority audiences for this first pass. It's also good to record secondary audiences, as well as any other major projects you want to do.

Also, please share any long-term data needs: for example, evaluating the product development process might be something we want to do annually, or measuring awareness of new tools, etc.

@Rfarrand - From my understanding, we generally do not lead the work; instead, we work in partnership with the Wikimania host. I am not sure about the survey for this year - I think @JAnstee_WMF will be leading that partnership work, and I believe I can support with question/survey development as well.

I'd be happy to help you with your Qualtrics survey for the hackathon - I am now the primary contact for support with Qualtrics.

Also - we are called "Program Capacity & Learning" now :)

@egalvezwmf - you mention focusing on primary audiences, but they may change depending on the product. For example, with a VE survey a year ago, we focused on a few different audiences, including those who made the most edits using VE, users who had given feedback on VE, etc. Targeting those specific groups of users yielded higher engagement (I believe it was something like a 25% response rate), but because we used targeted talk page messages, we probably could not do that frequently.

A simpler way to segment the audiences might be (see the sketch after this list):

  • Editors
    • Editors who have made more than 50 edits (or 1000, or 50,000 edits)
    • Editors who have been contributing for 6 months (or 1 year, or 5 years)
    • Content contributors who have failed at uploading a single photo (or someone who has uploaded 10 photos, or 10,000 photos)
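
To make those buckets concrete, here is a minimal, purely illustrative Python sketch. The `Editor` record, the account names, and the thresholds (50 edits, ~6 months, 10 uploads) are assumptions taken from the examples above, not an agreed data model or agreed cut-offs:

```
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Editor:
    username: str
    edit_count: int
    uploads: int
    registered: datetime  # account creation date

def segment(editors, min_edits=50, min_tenure_days=182, min_uploads=10):
    """Split editors into the rough buckets sketched above.

    Thresholds mirror the examples in this comment; they are
    placeholders, not agreed cut-offs.
    """
    cutoff = datetime.utcnow() - timedelta(days=min_tenure_days)
    return {
        "active_editors": [e for e in editors if e.edit_count >= min_edits],
        "tenured_editors": [e for e in editors if e.registered <= cutoff],
        "uploaders": [e for e in editors if e.uploads >= min_uploads],
    }

# Made-up accounts, for illustration only:
editors = [
    Editor("Alice", 1200, 40, datetime(2014, 3, 1)),
    Editor("Bob", 12, 0, datetime(2016, 1, 20)),
]
print({name: [e.username for e in group]
       for name, group in segment(editors).items()})
```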

Does this make sense, or is this not in the direction you were considering?

Hi @Rdicerb - great point. Based on what you describe, I am seeing three key areas:

(1) content contributors who have used x product and
(2) content contributors who engaged in feedback about x product

Whether we group all these products into one survey (for everyone who might be using these products) or whether we split them up will probably depend on a few factors, like what tools are available, timelines, and what data is needed. It would be great if you could specify which products are most important next year, if any, and what kind of data you need about these products to make decisions. We want to get ahead of the needs so we can coordinate (if it's needed).

Overall, I just need a rough idea about how many different products might need surveys first. Later on, we can dig deeper to figure out how we can combine surveys and/or efforts as much as possible to save resources and volunteer time.

(3) All editors who use editing tools/products

For this third area, a suggestion would be to think about what types of information are needed about all editors as a whole to help with product priorities and product development. Examples might be awareness of new tools in beta, awareness of or interest in the product development process, etc.

Hope this helps. Happy to meet to talk it over as well, if that would help.

There's talk of a general editor census; Edward is already aware of this.

If T89970: Enable microsurveys for long-term tracking of editing experience existed, then I'd want to run a single-question survey at several Wikipedias in the weeks before and after deployment of the Single Edit Tab. But I really don't think that we can justify a big survey for that. (This would also be handy for tracking long-term projects, like user trust and overall satisfaction.)
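
Since T89970 does not exist yet, the analysis side is hypothetical, but if microsurveys produced timestamped single-question scores, the before/after comparison around a deployment could be as simple as this sketch (the dates, scores, and deployment date below are invented):

```
from datetime import date
from statistics import mean

# Hypothetical 1-5 satisfaction scores from a single-question
# microsurvey, keyed by response date; the deployment date is invented.
DEPLOYMENT = date(2016, 7, 1)
responses = [
    (date(2016, 6, 20), 4), (date(2016, 6, 25), 3), (date(2016, 6, 28), 3),
    (date(2016, 7, 5), 5), (date(2016, 7, 10), 4), (date(2016, 7, 12), 4),
]

before = [score for day, score in responses if day < DEPLOYMENT]
after = [score for day, score in responses if day >= DEPLOYMENT]
print(f"mean before: {mean(before):.2f}, mean after: {mean(after):.2f}")
```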

Yes - thanks for bringing up the census @Whatamidoing-WMF.

Editing wants to get demographics only; their focus is on editor data, and less on "community" data, i.e. data about the people we work with directly and who give direct input to our work on Phabricator, mailing lists, talk pages, etc.

Also - I believe that Design Research is considering "benchmark usability tests": measuring how long it takes users to complete a certain task (e.g. make an edit), and then measuring that same task again a year later to see how task completion improves over time. I think surveys are used to some extent for those, but I'm not sure how much; I'm also not sure this will count as a major project.
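
As a rough illustration of the year-over-year comparison behind such benchmark tests (the task, times, and rounds below are invented, not Design Research's actual data):

```
from statistics import median

# Invented completion times (seconds) for the same benchmark task
# ("make an edit") measured in two annual rounds.
times_year1 = [95, 120, 88, 140, 105]
times_year2 = [80, 99, 85, 110, 90]

m1, m2 = median(times_year1), median(times_year2)
print(f"median year 1: {m1}s, year 2: {m2}s "
      f"({(m2 - m1) / m1 * 100:+.1f}% change)")
```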

I have reported back to @egalvezwmf. There are still some open questions, but the discussion in this task helped us get a good overview. Thank you to all contributors!