As we work on an "add an image" structured task, there are two main things that are going to affect the success of users presented with suggested images:
- How accurate is the algorithm?
- How hard is it to confidently verify a match?
In other words, the algorithm might be quite accurate, yet it may still be difficult for the user to verify a match from the information shown. For instance, if the article is about a person and the photo is of that person, then the algorithm was accurate. But if the photo's title and description don't contain the subject's name, it might be hard for the user to confirm that they really match.
Here's our idea for getting a sense of how difficult the task is and what metadata users need in order to complete it. This will help us tune the algorithm and design the user experience.
- @Miriam can generate a list of something like 1,000 image recommendations for unillustrated articles in English Wikipedia, as was done for the "first version" of the algorithm in T256081. This time, though, we want to include lots of metadata.
- Commons description
- Commons caption
  - Other wikis where the image is used in the corresponding article
- Depicts statements
- Commons categories
- Source of the match (Wikidata item, Commons category, cross-wiki, etc.)
- Anything else the user might like to have?
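To make the dataset concrete, one recommendation record could look something like the sketch below. This is purely illustrative: the field names and values are hypothetical, not a fixed schema, and the real export would be whatever format @Miriam's pipeline produces.

```python
# Hypothetical shape of a single image-recommendation record in the dataset.
# Every field name and value here is illustrative, not an agreed schema.
recommendation = {
    "article": "Example Person",                 # unillustrated enwiki article
    "image": "File:Example_Person_2019.jpg",     # suggested Commons file
    "commons_description": "Example Person speaking at a 2019 conference",
    "commons_caption": "Example Person in 2019",
    "used_on_wikis": ["dewiki", "frwiki"],       # wikis using this image on the same article
    "depicts": ["Q12345"],                       # "depicts" statement item IDs
    "commons_categories": ["21st-century people"],
    "match_source": "wikidata",                  # wikidata | commons-category | cross-wiki
}

print(sorted(recommendation))
```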
- The Growth team will then take that dataset and use it to back a simple tool that displays one match at a time, along with some portion of the metadata. Users can open the article to check it out, and simply click "yes", "no", or "skip" to see the next suggestion.
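The review flow described above could be sketched roughly as follows. This is a minimal stand-in, assuming the dataset is just a list of records and the judgments are collected in order; the actual tool would be interactive, showing one suggestion plus metadata at a time.

```python
# Minimal sketch of the one-at-a-time review flow: pair each suggestion
# with a "yes"/"no"/"skip" judgment. The suggestions and answers here are
# hypothetical; the real tool would be an interactive interface.
def review(suggestions, answers):
    """Record one yes/no/skip judgment per suggestion, in order."""
    results = []
    for suggestion, answer in zip(suggestions, answers):
        if answer not in ("yes", "no", "skip"):
            raise ValueError(f"unexpected answer: {answer}")
        results.append({"article": suggestion["article"], "judgment": answer})
    return results

suggestions = [{"article": "A"}, {"article": "B"}, {"article": "C"}]
print(review(suggestions, ["yes", "skip", "no"]))
```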
- Then we run usertesting.com tests where we ask testers to go through 15 suggestions or so, thinking aloud about how they decide whether the image matches. We could even run tests using different subsets of the metadata to see which works best.
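If we do try different metadata subsets, assigning test sessions to conditions could be as simple as the round-robin sketch below. The subsets and tester IDs are hypothetical examples, not a proposed study design.

```python
import itertools

# Hypothetical round-robin assignment of test sessions to metadata subsets,
# so each condition (which fields the tester sees) stays roughly balanced.
subsets = [
    ["commons_description", "commons_caption"],
    ["depicts", "commons_categories"],
    ["commons_description", "depicts", "match_source"],
]
testers = ["t1", "t2", "t3", "t4", "t5"]
assignment = {t: s for t, s in zip(testers, itertools.cycle(subsets))}
print(assignment)
```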