Background
Under applicable legislation, the WMF is required to act on CSAM (child sexual abuse material) by detecting and removing such content within a reasonable timeframe. We are setting up the MediaModeration extension to scan uploaded images for CSAM automatically.
User Story 1
As a Trust & Safety (T&S) specialist, I need a reliable way to be notified of suspected CSAM content so that I can review it and remove it if necessary.
User Story 2
As a T&S specialist, I need to be notified of suspected CSAM content as soon as possible so that I can review and remove it quickly.
Acceptance Criteria
- Images are scanned automatically
- Scanning runs continuously, starting with the backlog of existing images
- New images are checked as close to upload time as possible
- Rate limits for the PhotoDNA API are respected (max 200 requests/second, max 10 million requests/month); see the throttling sketch after this list
- If suspected CSAM is found, an alert email is automatically sent to the relevant T&S address (see the notification sketch after this list)
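To make the rate-limit criterion concrete, here is a minimal client-side throttling sketch in Python. It is not the MediaModeration implementation: the fixed-window approach, the class and method names, and the in-memory monthly counter are all illustrative assumptions. A production scanner running as distributed jobs would need to persist and share the monthly counter across processes and hosts (for example, in a database).

```python
import time

MAX_PER_SECOND = 200        # PhotoDNA per-second cap from the criteria above
MAX_PER_MONTH = 10_000_000  # PhotoDNA monthly cap from the criteria above


class PhotoDnaThrottle:
    """Blocks callers so requests stay under both PhotoDNA limits.
    Single-process sketch: the monthly counter lives in memory only."""

    def __init__(self) -> None:
        self.window_start = time.monotonic()
        self.sent_this_second = 0
        self.sent_this_month = 0

    def acquire(self) -> bool:
        """Return True once a request may be sent; return False when the
        monthly budget is exhausted so the caller can requeue the image."""
        if self.sent_this_month >= MAX_PER_MONTH:
            return False
        now = time.monotonic()
        if now - self.window_start >= 1.0:
            # A new one-second window has begun; reset the per-second count.
            self.window_start = now
            self.sent_this_second = 0
        if self.sent_this_second >= MAX_PER_SECOND:
            # Window is full: sleep out its remainder, then start fresh.
            time.sleep(1.0 - (now - self.window_start))
            self.window_start = time.monotonic()
            self.sent_this_second = 0
        self.sent_this_second += 1
        self.sent_this_month += 1
        return True


# Usage: gate every PhotoDNA request through the throttle.
throttle = PhotoDnaThrottle()
if throttle.acquire():
    pass  # send the image hash to the PhotoDNA Match endpoint here
```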
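Similarly, a minimal sketch of the alert step, assuming Python's standard smtplib, a local mail relay, and hypothetical sender/recipient addresses; MediaModeration's actual mailer and message format are not specified here.

```python
import smtplib
from email.message import EmailMessage


def send_match_alert(file_name: str, details: str) -> None:
    """Email T&S when PhotoDNA reports a suspected match."""
    msg = EmailMessage()
    msg["Subject"] = f"[MediaModeration] Suspected CSAM match: {file_name}"
    msg["From"] = "mediamoderation@example.org"  # assumed sender address
    msg["To"] = "trust-and-safety@example.org"   # assumed T&S address
    msg.set_content(f"PhotoDNA flagged {file_name} as a suspected match.\n{details}")
    with smtplib.SMTP("localhost") as smtp:      # assumed local mail relay
        smtp.send_message(msg)
```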