
Classify completion candidate image results
Closed, Declined · Public · 8 Estimated Story Points

Description

We need a way to manage the NSFW content in the commons query completion candidates. Manual review and blacklisting simply won't scale; both the variety of terms and the burden of maintaining such a list are too much for us to handle.

There are a variety of NSFW image classifiers openly available. Determine a reasonable one to use, and wire up a system to maintain a set of titles and classifications in Hive. This will run daily; we should evaluate whether there is benefit to keeping a running dataset of classified titles rather than re-classifying everything daily (I suspect we shouldn't re-classify, but experimentation is needed).
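
As a rough sketch of the "running dataset" idea (the table names are placeholders, not an agreed-on schema), the daily job could anti-join today's candidates against the titles already classified and only score the difference:

```
# Sketch only: keep a persistent classification table and classify just the new titles.
# Table names (discovery.completion_candidates, discovery.nsfw_classifications) are
# placeholders, not a real schema.
from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

candidates = spark.table("discovery.completion_candidates")   # today's candidate titles
classified = spark.table("discovery.nsfw_classifications")    # titles scored on earlier runs

# Anti-join: only titles we have never scored need to go through the classifier.
to_classify = candidates.join(classified, on="page_title", how="left_anti")
print(f"{to_classify.count()} of {candidates.count()} titles need classification")

# ... run the chosen classifier over `to_classify` and append the results to
# discovery.nsfw_classifications so the next daily run skips them ...
```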

Event Timeline

CBogen set the point value for this task to 8. (Aug 10 2020, 3:36 PM)

Note that we haven't determined a path toward agreement on implementing an NSFW classifier. I'm moving this ticket to "Waiting" until that is sorted out.

I'm meeting with @Keegan and JK this week to discuss, and I believe Erica is bringing this to legal's attention for a review as well.

@EBernhardson A couple of other notes:

See this ticket for discussion of using Open_NSFW in the past: https://phabricator.wikimedia.org/T225664

We also currently have images in CAT run through the Google NSFW classifier, but we don't have enough Google credits ATM to run the entirety of commons through it.

@Ramsey-WMF feel free to add more detail here.

@EBernhardson can you help me understand how the NSFW image classifier would populate the blocklist for the query completion? Are you just planning to use the list of terms that the image classifier relies on, or is there more to it?

The prototype I've put together does the following:

  1. Start with the full set of completion candidates. For commonswiki this is ~20k queries
  2. Run them all through the public search endpoint to collect the top 20 results with ~250px thumbnails
  3. Run the thumbnails through a classifier that gives scores for is_safe and is_nsfw
  4. Play with some thresholds to turn the scores for individual images into a boolean is_nsfw
  5. Play with some thresholds for the number of allowable nsfw results per query (in the prototype, one nsfw result is tolerated; two or more marks the query nsfw)

From there we have an is_nsfw marker for all queries and can decide what to do with those queries.
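
Roughly, that prototype flow looks like the sketch below. The search API call mirrors the public endpoint pattern described above; score_image is a stand-in for the actual classifier (open_nsfw in the prototype), not a real library call.

```
# Rough reconstruction of the prototype flow, not the actual code.
import requests

API = "https://commons.wikimedia.org/w/api.php"

def top_thumbnails(query, limit=20, width=250):
    """Step 2: top `limit` file-namespace results for a query, as ~250px thumbnail URLs."""
    params = {
        "action": "query", "format": "json",
        "generator": "search", "gsrsearch": query,
        "gsrnamespace": 6, "gsrlimit": limit,
        "prop": "imageinfo", "iiprop": "url", "iiurlwidth": width,
    }
    pages = requests.get(API, params=params).json().get("query", {}).get("pages", {})
    return [p["imageinfo"][0]["thumburl"] for p in pages.values() if "imageinfo" in p]

def query_is_nsfw(query, image_threshold=0.8, max_nsfw_images=1):
    """Steps 3-5: classify each thumbnail, then flag the query if too many images look nsfw."""
    nsfw_images = 0
    for url in top_thumbnails(query):
        image = requests.get(url).content
        is_safe, is_nsfw = score_image(image)   # stub for the real classifier
        if is_nsfw > image_threshold:           # step 4: per-image boolean
            nsfw_images += 1
    return nsfw_images > max_nsfw_images        # step 5: one nsfw result tolerated
```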

@EBernhardson as part of Computer Aided Tagging's machine vision platform, we currently use Google's SafeSearch classifier system to prevent "surprising" images from appearing in the popular images feed. This means we already have hundreds of thousands of images with NSFW flags in a DB, but it's a CAT DB that nothing else uses.

Would it be better for us to simply expand the usage of this Google system?

One benefit of using what we already have is that we've already set thresholds using Google's SafeSearch criteria. SafeSearch Detection detects explicit content such as adult or violent content within an image. This feature uses five categories (adult, spoof, medical, violence, and racy) and returns the likelihood that each is present in a given image. For our purposes, we use everything except spoof (which was pretty useless).

We get SafeSearch scores for "free" when using Google Cloud vision for label detection, but we have some budget room to make those requests separately if needed. If there's an open classifier system that's better, we should certainly explore using it, but Google is pretty good at this and we're already using the system anyway.
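
For reference, reading those categories through the Cloud Vision Python client looks roughly like the following; the LIKELY cutoff is illustrative only, not the thresholds CAT actually uses.

```
# Illustrative only: fetch SafeSearch likelihoods for one image URL and apply a
# simple cutoff. The LIKELY threshold is an assumption, not CAT's configuration.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

image = vision.Image()
image.source.image_uri = "https://example.org/thumb.jpg"   # placeholder thumbnail URL

annotation = client.safe_search_detection(image=image).safe_search_annotation

# adult, medical, violence, racy (and spoof) are Likelihood enums,
# ranging from VERY_UNLIKELY to VERY_LIKELY.
flagged = any(
    level >= vision.Likelihood.LIKELY
    for level in (annotation.adult, annotation.medical, annotation.violence, annotation.racy)
)
print("flag as nsfw:", flagged)
```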

Better is hard to say, but certainly it's something that we could consider.

While the current prototype reaches out to query the MediaWiki APIs directly as a simplification, any production-level deployment will need a different method of sourcing thumbnails of the top images related to a query. As MachineVision is a MediaWiki extension, this also rules out querying any MachineVision APIs, meaning integration plausibly has to happen by reading analytics replicas of the MachineVision databases. We could potentially evaluate how much overlap there is between the set of pages we need classified and the set of pages available, but if the overlap is too small there isn't any obvious path for code in the analytics network to ask the mediawiki side to generate particular data.
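
If we wanted to measure that overlap, a quick check against the analytics replica could look something like the sketch below; the table and column names are guesses at the sqooped layout, not a verified schema.

```
# Rough overlap check: how many of the pages we need classified already have rows
# in the MachineVision replica? Table and column names below are assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

needed = spark.table("discovery.completion_result_pages").select("page_id").distinct()
available = (
    spark.table("wmf_raw.mediawiki_machine_vision_safe_search")
    .selectExpr("mvss_image AS page_id")
    .distinct()
)

overlap = needed.join(available, on="page_id", how="inner").count()
print(f"coverage: {overlap / needed.count():.1%} of needed pages already classified")
```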

Overall, I suspect it would be a bit of a sandpit. But the calculus around future maintenance burden isn't particularly simple. The open_nsfw classifier I applied in the prototype is essentially abandonware; no updates will be coming to it. In general there doesn't seem to be much progress in open models of this nature being released publicly these days. Most of the functionality provided by the classifier comes from the weights, though; the related bits of code we have to maintain are quite thin and relatively minor.

In a more ideal world there would be a trivial way for analytics jobs to query APIs that turn page titles into is_safe predictions; that would be a great simplifier, but a quick evaluation shows significant roadblocks to making it happen.

Gehel triaged this task as High priority. (Oct 28 2020, 1:28 PM)

Just FYI, there's a "Bad Images" list curated on enwiki that may/may not be helpful here: https://en.wikipedia.org/wiki/MediaWiki:Bad_image_list

Classification of the images is being done with the MachineVision extension, by reading their database. T274220 is populating the database with common results for image search.