
Research on article writing lists
Closed, Duplicate · Public

Description

This prototype illustrates some of the ideas about translation lists (T96147) and suggestions (T87439). We want to define a plan to apply at Wikimania.

Goals

We want to determine whether the design ideas for suggestions and lists are effective in helping users to:

  • Find relevant articles to translate.
  • Translate more articles.
  • Become more effective by better organising their translation efforts.

Research questions

About current behaviour

  • How do users normally find which articles to translate?
  • How do users keep track of these articles if they don't have time to work on them immediately?
  • Which factors help users decide which articles to translate (e.g., featured article, number of views, reliability based on reference count, etc.)?
  • Is being part of a campaign a motivation to translate more articles?

Suggestions

  • Is the purpose of suggestions clear?
  • Is the "saving for later" mechanism useful for keeping the relevant suggestions?
  • How good do suggestions need to be in order to be useful? Are topics similar to the ones a user edits a good basis for selections? Is more control (e.g., selecting a category such as "science" to get suggestions in that area) desirable?

Lists of articles

  • Is creating lists useful for keeping track of and organising what to translate?
  • Are shared lists useful for participating in collaborative translation efforts?
  • Is creating campaigns a useful way to ask the community to get important articles translated?
  • How do users perceive the relationship between "for later", "custom lists" and suggestion lists? Do these multiple kinds of lists make things easier to understand, or do they confuse users?

Additional ideas to test

  • Do the statistics provide a clear overview of the user's progress?
  • Is the simplified "new translation" dialog an effective way to create new translations?

Proposed plan

Before the test

  • Do you translate articles when contributing to Wikipedia?
  • Do you know Content Translation? Have you used it?
  • When doing translations, how do you decide which articles to translate? How do you find them?
  • If you find an interesting article to work on but you don't have time right now, do you keep track of it to work on it later?
  • Have you participated in any translation campaign with other users (show this campaign as an example)? Would you be interested in participating in similar campaigns?
  • Let's try this prototype. It is a very basic prototype for trying out and discussing ideas about how to translate articles. The prototype assumes you are a Spanish speaker who is interested in translating articles about science and nature.

The test
General understanding of the "suggestions", "in progress" and "published" sections:

  • This is a tool to translate Wikipedia articles. Based on what you see in this view, what can you do here? (The user is expected to identify the possibility of starting new translations, the suggestions, the statistics, the help area, and language selection).
  • Are there other main views? Can you tell what their purpose is? What information do they contain, or what information would you expect to find there? (the user is expected to identify the three main sections representing the steps of the translation process: finding articles (suggestions), translating them (in progress) and completing them (published))

Suggestions:

  • You may notice there is a list with pictures of animals. What is the purpose of that? (the user provides more details on how suggestions are expected to work)
  • Which suggestions are you provided with, and what information do you have for each one? (the user is expected to go through the suggestions and identify aspects such as the possibility of discarding them, the expectations behind the menus, and the impact indicators such as "featured").
  • What would you do to start translating the "Golden toad" article? (not supported in the prototype but good to see if users click on it)
  • Let's imagine that you are very interested in translating the "Lion" article but you don't have time for it now. What would you do to make it easy to find that article in the future, when you might have more time? (the user clicks on the star button next to the Lion suggestion).
  • Can you describe what happened? (the user should confirm that "Lion" is now part of the "for later" list)
  • Can you tell me, without doing it, what you would have to do if, in the future, you want to translate this article or are no longer interested in it? (the user is expected to explain that clicking the item starts translating it, and clicking the star reverts the addition to the list)
  • When you save a translation for later, what do you expect to happen? Would you want to keep track of changes to this article, once you've saved it for later? Why?
  • Are the current suggestions interesting for you? What would make these suggestions interesting? Would it help if they were related to the kinds of topics you normally edit? Do you think suggestions would be better if you could select a category other than science and nature? (the user is expected to provide details on what makes a good suggestion and the degree of control expected)

Lists:

  • You may have noticed that the suggestions include a "Wiki Loves Nature" list. Let's imagine that you want to keep track of articles about your favourite scientists that are missing in Spanish. Would you be able to create a list named "Famous scientists"? (the user is expected to click on the "add folder" icon).
  • (when the new collection dialog is visible) Please, can you describe which options you have when creating this list, and what their purpose is? (the user is expected to describe the purpose of the campaign configuration options).
  • Would you be able to add the article about Albert Einstein to the list of scientists? Can you describe what you see in each step? (the user is expected to click on "add page" and search for "Albert Einstein"; it is interesting to check how useful the initial suggestions are perceived to be)
  • Imagine that you want to share this list with other friends interested in science so they can add more articles. What would you do? (the user is expected to click on the link icon, which does not work but may help us to understand how collaboration is perceived in this context)
  • Let's imagine you are interested in the "Wiki Loves Nature" campaign. What would you do to translate some of those articles? (the user is expected to add the list by clicking on the folder and to explore its contents in the "All collections" section).

Additional ideas:

  • Based on the statistics shown, how productive do you think this user is?
  • Can you go to the "In progress" section? Would you be able to start a new translation of "Albert Einstein"? What would you do if you wanted to translate it from German to French instead?

After the test

  • What is your general impression of the tool you tested?
  • Which aspects seem to work best for you, based on your existing experience as a translator?
  • Which aspects don't seem to work, based on your experience as a translator?
  • Do you expect to easily find relevant articles to translate with the tool you just tested?
  • Do you expect to translate more with this system? Why?
  • Do you expect to be more effective by using these tools to organise your translation efforts?

Event Timeline

Pginer-WMF raised the priority of this task from to Normal.
Pginer-WMF updated the task description.

Is this the same as https://meta.wikimedia.org/wiki/Research:Increasing_article_coverage ?
I'm getting confused by the many CX tasks on this topic, especially as terms are used in a very confusing way. It's not clear where to comment on the general ideas either. Most comments these days are on https://meta.wikimedia.org/wiki/Research_talk:Increasing_article_coverage anyway.

Is this the same as https://meta.wikimedia.org/wiki/Research:Increasing_article_coverage ?

No. There are several parts in progress intersecting in this area; let me try to clarify:

  • The present task is about how users will interact with our tool when they need to collect several articles to translate later or create translation campaigns. That will be tested using interactive prototypes that simulate providing suggestions to users, but the list will just be a static list of articles that I'll pick to simulate a scenario (e.g., a bunch of articles about nature and science). This will be useful for testing the designs for suggestions (T87439) and custom lists (T96147), but to implement those designs we'll need a service that is able to generate the real suggestions.
  • The research you linked to is an experiment to identify the best strategy for providing suggestions. Here we are talking about finding real articles that the user can translate. The research team is evaluating different strategies for selecting relevant articles and assessing which work best. This will inform the development of the suggestion service I mentioned above.

Hope this is useful to clarify things a bit. As I complete the prototype (T102768) and the test plan (T104353), everything should be easier to visualise.

Amire80 moved this task from Needs Triage to CX6 on the ContentTranslation board. Jul 2 2015, 4:36 PM
Pginer-WMF updated the task description. Jul 8 2015, 11:13 AM
Pginer-WMF set Security to None.
Pginer-WMF updated the task description.

@Capt_Swing I created an initial version for the plan, feel free to add any comments or edits.
I guess going through the prototype together (or capturing that in a video) may help, but I wanted to share this early.

Pginer-WMF updated the task description. Jul 8 2015, 5:20 PM

Thanks, @Pginer-WMF. What's the timeline for this? When do you need feedback from me (or other DR folks), when do you want to run user studies, and who's running 'em?

The idea is to organise this in two stages:

  • At Wikimania I plan to conduct in-person testing with editors I can find there. We can probably find users who translate and may have participated in campaigns, but probably not those who organise the existing campaigns. It would be good to learn, iterate on the designs, and discover new problems to be solved.
  • After Wikimania we can aim to organise remote research sessions to validate the more detailed ideas (based on the initial feedback from Wikimania).

So, in terms of timeline:

  • For Wikimania (next week), I can go with the current version (it will be better than just talking about the general idea without any structure), but if someone in the user research team has time to quickly go through it and spot the most obvious problems or suggest improvements, that would be great.
  • After Wikimania, we can probably aim for the end of August or early September to start with the research. I'm not sure what resources will be assigned to this, but I can try to do the research myself if none are available.

Got it. Thanks @Pginer-WMF . I haven't looked at the prototype too deeply, and I'm not all that familiar with ContentTranslation yet, BUT the protocol itself looks good for the kind of information you're seeking. Perhaps we can chat at the Hackathon? If nothing else, it would be a great opportunity for me to learn more about the current functionality and future plans for CT ;)

Perhaps we can chat at the Hackathon?

Sure. I'll be really happy to do that.

Created a task to provide feedback on the research proposal: T107462

Nemo_bis removed a subscriber: Nemo_bis. Jul 31 2015, 9:28 PM

@Pginer-WMF, here's some feedback on the test plan. Overall looks good!

  1. If the test user is a user of the current content translation system, start out by asking them what works well, and what does not work well, about the current translation system. Ask this while they are looking at the current Content Translation interface, and before you show them the prototype.
  2. Some of the questions are phrased in a way that leads the user towards a particular response. Instead of "When adding the lion article to that "for later" section do you expect any other automatic changes like getting it added to your watchlist to track the changes, or you expect this classification to be for translation purposes", say "When you save a translation for later, what do you expect to happen?" And if they don't say "it will be added to my watchlist", then perhaps you can ask follow-up questions like "Would you want to keep track of changes to this article, once you've saved it for later? Why? How would you expect to do that?"
  3. Overall, it would be useful to provide some feedback when the user clicks a UI element, even if it's just a transient pop-up explaining what happened in a single sentence. This is important for the user to build a better mental model of how your system works, and how it might differ from the existing system.
  4. Providing feedback when the user clicks a UI element also helps you avoid the following issue: since not all of the UI elements in this prototype are clickable, you may discover during your testing that the users figure out how to "cheat" to get the "right" answer (users often want to "please" the tester, which makes them behave differently from how they would under normal circumstances). In this kind of user test, they may learn that they only need to hover their mouse over different elements on the screen and click wherever the cursor turns to a pointing finger in order to get the answer "right". When they do this, they are no longer really engaging with the prototype the way you want them to: they aren't thinking about the prototype as a real interface, just looking for an active element. If this happens, it can make your data less useful. You can help avoid this by making all the elements active (so they at least trigger a tooltip or short pop-up when clicked).
  5. A lot of your questions expect the user to give a particular verbal response, sometimes a lengthy and detailed one (for example, the response expected to the question "Which suggestions are you provided with and which information you have for each one?"). You should be prepared with open-ended follow-up questions, or to switch the order of the questions in your list, if the user doesn't give you the response you were expecting.
Pginer-WMF updated the task description. Aug 4 2015, 8:32 AM

Thanks @Capt_Swing for your feedback. This is really useful.

I plan to also use my time with users to test the current state of the translation editor, so that should cover (1). I updated the task description to adjust the questions according to (2).

Regarding (3), I normally ask what the user was expecting to happen (and then add a clarification of what would happen, if different). I think that is useful for learning about the user's mental model. I see the value of telling users what is expected to happen to keep the experience more fluid, but identifying these mismatches can be very informative.
Having said that, (4) is a good point worth paying attention to, and not making it extremely obvious which parts are interactive and which are not can help avoid the "guided tour" effect.

Regarding (5), you are totally right. I was expecting users to deviate from the expected answers, but it would be good to prepare follow-up questions anticipating possible responses.

Pginer-WMF updated the task description. Aug 4 2015, 9:36 AM
Amire80 moved this task from CX6 to CX7 on the ContentTranslation board. Oct 15 2015, 9:01 AM
Amire80 moved this task from CX7 to CX8 on the ContentTranslation board. Jan 24 2016, 10:26 PM
Amire80 renamed this task from "Research on article lists" to "Research on article writing lists". Feb 2 2016, 8:57 PM
Harej added a subscriber: Harej. Mar 8 2016, 3:09 AM

I would be interested to hear what you've learned from this process. Though specific to Content Translation here, this could be generalized to task curation in general.

@Pginer-WMF this was the ticket for our Content Translation campaign study, yes? If so, we should probably close it as a duplicate of T119087 (which I created, for some reason I can no longer recall), or close it as "resolved", or whatever. Thoughts?

@Harej findings are in this PDF, and more details in the page linked above.

Capt_Swing moved this task from Backlog to Blocked on the Design-Research board.

@Pginer-WMF this was the ticket for our Content Translation campaign study, yes? If so, we should probably close it as a duplicate of T119087 (which I created, for some reason I can no longer recall), or close it as "resolved", or whatever. Thoughts?

You are right. It makes sense to mark this one as a duplicate of T119087.

Restricted Application removed a subscriber: Liuxinyu970226. Mar 21 2016, 9:11 PM

Done! Thanks.