## Guided “tagging”
### User stories
- As a **casual uploader**, when I contribute tags to images, I want to tag them with as little effort as possible, so that I can make them more findable for others.
- As an **advanced user**, when I monitor structured data edits, I want to filter machine-assisted depicts edits, so that I can evaluate the quality of tags and revert if needed.
### Value / Rationale
The data we get for an image will initially only be as good as the user’s willingness and ability to describe it with various properties. Offering “suggested” tags makes it easier for users to tag images, which in turn increases the amount of data we collect.
### Condition of Satisfaction
The user is shown on-screen functionality that recommends tags relevant to the image they are viewing.
## Design principles
Working principles ([[ https://docs.google.com/presentation/d/1975sHzAEo5CESICLdFP0vYYOZuk3ayE0xSWuN5ixWJE/edit#slide=id.g5dfae1addb_0_24 | link ]])
- **Reduce complexity** - provide a mobile-first way for casual users to do micro-contributions by tagging their own uploads as well as a large backlog of images in need of tags (without having to rely on search or categories to find those images)
- **Be transparent** - make it easy for casual users to find explanations for what this tool is and how to use it; solicit explicit feedback that both casual and advanced users can give
- **Respect human work** - only ever suggest tags, never automatically add them to images; integrate the tool into advanced users' existing workflows
- **Don’t force binary choices** - encourage casual users to ignore/skip individual tags if they choose
- **Mitigate biases whenever possible** - depending on the model provider, prune the classifier list as necessary; make it as easy for advanced users to revert confirmed tags as any other contribution, in order to mitigate intentional or unintentional biases; conduct internal and external research into any biases or inequities this tool may facilitate on Commons
## Design links
[x] [[ https://docs.google.com/presentation/d/1975sHzAEo5CESICLdFP0vYYOZuk3ayE0xSWuN5ixWJE/edit?usp=sharing | Design brief ]]
[x] [[ https://docs.google.com/presentation/d/1El3MgNLMC6aJJL8nvQTEcsNAgisNsNQg_oeSE7ANT3E/edit#slide=id.p | Comparative review ]]
[x] [[ https://wikimedia.invisionapp.com/freehand/document/rNkDqQQgI | Storyboard & initial design exploration ]]
[x] [[ https://whimsical.com/KjDjWypox3H5T2n6uWYeeS | User flow ]]
[x] [[ https://phabricator.wikimedia.org/T233016 | Mocks ]]
[x] [[ https://phabricator.wikimedia.org/T230918 | Usability study ]]
[ ] [[ https://phabricator.wikimedia.org/T233031 | Design Echo notification icon and copy ]]
[ ] [[ https://phabricator.wikimedia.org/T234571 | Design onboarding/empty-state illustrations ]]
[ ] Test with functional prototype
### Resources
- Computer-aided “Depicts” tagging overview [[ https://docs.google.com/presentation/d/1Y59irgRzlVjh2THxzCywsqsK9_YfLn1FN3NXYwEeTXs/edit#slide=id.g4089212fc4_0_132 | slides ]]
- WMF design research report on [[ https://docs.google.com/document/d/1a6NXRAksc1ivNoo6arpDn6jQrCOhJ5toRhlnPyl2z2g/edit | recommended AI principles ]]