Background
The team is experimenting with suggesting machine-generated article descriptions to assist users in creating short descriptions. We need to identify our target test languages and regions. The test audience must be large enough for us to assess both the integrity of the algorithm and the usefulness of the feature.
Task
- Pull baseline data about editors, as defined in our Shallow Deck
- Answer: what is the minimum number of annotations and participants we need for article descriptions in order to validate the hypothesis?
Important Additional Context