**Operation People-Understand-ORES**
- What are the things everyone in the project should know?
-- ORES is a backend service that supports editors by making machine learning predictions available to their tools
-- Killer use-cases are counter-vandalism and measuring article quality
-- ORES works in 58 wikis and is expanding by 2-3 wikis per quarter
-- The #scoring-platform-team (SPT) is very understaffed, with no dedicated product support
--- The SPT doesn't build any end-user facing tools -- it builds only the backend that those tools use to fetch predictions.
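As background on what "the backend that lets them use predictions" means in practice, here is a minimal sketch of how a client-side tool might talk to ORES. The v3 `scores` endpoint shape is ORES's public API; the helper function names and the sample payload below are illustrative assumptions, not real data or real library code.

```python
import json
from urllib.parse import urlencode

ORES_HOST = "https://ores.wikimedia.org"

def build_scores_url(wiki, model, revids):
    """Build a v3 scores request URL for one model and a list of revision IDs."""
    query = urlencode({"models": model, "revids": "|".join(str(r) for r in revids)})
    return f"{ORES_HOST}/v3/scores/{wiki}/?{query}"

def extract_probability(payload, wiki, revid, model, outcome):
    """Pull one outcome probability out of a v3-shaped scores response."""
    score = payload[wiki]["scores"][str(revid)][model]["score"]
    return score["probability"][outcome]

# URL a tool would fetch to score one enwiki revision with the "damaging" model:
url = build_scores_url("enwiki", "damaging", [123456])

# Illustrative (made-up) response in the v3 shape:
sample = json.loads("""
{
  "enwiki": {
    "scores": {
      "123456": {
        "damaging": {
          "score": {
            "prediction": false,
            "probability": {"false": 0.93, "true": 0.07}
          }
        }
      }
    }
  }
}
""")

p_damaging = extract_probability(sample, "enwiki", 123456, "damaging", "true")
```

A patrolling tool would then threshold `p_damaging` to decide whether to surface the edit for review.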
**Project details**
- What do you want to achieve (goals)?
-- Increase ORES usage across wikis -- people understand what it is for and are able to use it
-- Establish a feedback loop with communities, so we hear about opportunities to improve and about anything that is not working
- Who is your target audience (be specific)?
-- Patrollers, admins, and community organizers (WikiProject Organizers, Editathon organizers)
- How do you want your audience to feel?
-- Informed and Understood
- What medium do you have in mind and how do you want to distribute it?
-- Phabricator
--- PROS: Easier to streamline tasks
--- CONS: only in English
-- MediaWiki talk page:
--- PROS: more accessible
--- CONS: needs manual conversion to a Phabricator task.
-- ORES page on WP
--- PROS: Very close to target audience
--- CONS: not all wikis that deploy this tool have one. An example of a wiki that does: https://it.wikipedia.org/wiki/Progetto:Patrolling/ORES
- Do you already have an idea of what you need to roll out your project? If so, please check all that apply (i.e. [x]):
-- [ ] Blog
-- [x] Page on wiki
-- [ ] Social media
-- [x] Email or MassMessage
-- [ ] Presentation
-- [ ] Video
-- [ ] Infographic
-- [ ] Diagram
-- [ ] Art piece
-- [ ] Other: _______
-- [x] I don’t know
- How are you measuring your success (key performance indicators)?
-- Usage of ORES-powered tools (e.g. new filters for recent changes, Huggle, RTRC, etc.)
-- Improvements in model fitness (e.g. accuracy, precision, recall, ROC AUC, PR AUC)
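For reference, the simpler fitness metrics named above can be computed directly from labelled predictions. This is a minimal sketch with made-up toy data; the function name is mine, and in practice the team would use a standard library rather than hand-rolling this.

```python
def precision_recall(y_true, y_pred):
    """Compute precision and recall for binary labels (1 = positive class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)       # true positives
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)   # false positives
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)   # false negatives
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Toy example: 1 = damaging edit, 0 = good-faith edit (illustrative data only)
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0]
p, r = precision_recall(y_true, y_pred)
```

Tracking these per wiki and per model over time would show whether model fitness is actually improving.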
- What organizational goal is this linked to? (please link to current Annual Plan details)
-- Knowledge Integrity, Machine Learning Infrastructure
- When is this due?
-- Planning: FY20 Q1
-- Production: n/a
-- Launch: FY20 Q2
- Please provide additional context & direction:
-- Previous attempts:
--- https://www.mediawiki.org/wiki/ORES/Get_support
--- https://tools.wmflabs.org/ores-support-checklist/
--- {T182054} (Failed to start)
--- {T164331} (Failed to get clear buy-in from Growth)
-- Relevant visual material:
--- One presentation on good ORES feedback: https://www.youtube.com/watch?v=rsFmqYxtt9w#t=30m00s
-- Existing ideas:
---
- Who is leading this project on your team?
-- Aaron Halfaker - Principal Research Scientist - ahalfaker@wikimedia.org
-----
### What is the problem?
The Scoring Platform team works with Wikimedia projects to deploy machine learning models on their wikis, in their languages. These initiatives often succeed at engaging the communities at all stages, from initial investigation through deployment to ongoing quality control. This is not only good practice; it is required: without the participation of the communities, we do not have the language assets needed to deploy models.
ORES currently supports ~50 wikis (see https://tools.wmflabs.org/ores-support-checklist/). We are interested in supporting more wikis over time, as well as in having conversations with our communities about quality control.
In some wikis (e.g. Hungarian Wikipedia), ORES remains unknown and underutilized by patrollers and other editors who would benefit from it. We also never hear from many communities about how their models are working (or not working). We'd like to improve our ability to communicate with these communities and to hear back from them. We'd also like to figure out a better process for sharing responsibility with product teams.