**What it does:** Image classification for commons uploads.
**Wiki thing it helps with:**
* Crude image categorization (human/selfie, dog, street, house, car) is easier than fine-grained categorization; the main constraint is that each category needs a training set (so unseen categories cannot be predicted)
* Find uncategorized images
* Find likely unwanted images (copyvio, etc.)
* Estimate image quality
* Estimate image creation date (e.g. 1920s vs. 2000s), which could be used to verify PD-old claims
* AI that combines image content with article text and suggests relevant images for articles
* Automatically generate image captions and alt text
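Several of the use cases above (categorization, finding likely unwanted images, quality estimates) boil down to turning raw classifier probabilities into actionable suggestions. A minimal sketch of that post-processing step, with made-up labels and scores standing in for the output of a real model such as a retrained TensorFlow network:

```python
# Hypothetical sketch: filter and rank classifier scores into category
# suggestions. The labels and probabilities are illustrative only; a real
# model (e.g. a retrained Inception net) would produce them.

def suggest_categories(scores, threshold=0.5):
    """Keep labels scoring at or above `threshold`, sorted by descending probability."""
    return sorted(
        ((label, p) for label, p in scores.items() if p >= threshold),
        key=lambda item: item[1],
        reverse=True,
    )

scores = {"human/selfie": 0.91, "dog": 0.07, "street": 0.62, "car": 0.12}
print(suggest_categories(scores))
# [('human/selfie', 0.91), ('street', 0.62)]
```

The threshold would need tuning per category; "sort on probability" (as in the Deeplearning page linked below) lets human reviewers work through the most confident predictions first.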
**Things that might help us get this AI built (optional):**
* https://commons.wikimedia.org/wiki/User:Basvb/Deeplearning (sort on probability)
* TensorFlow image retraining: https://www.tensorflow.org/how_tos/image_retraining/
* https://commons.wikimedia.org/wiki/User:Multichill/Using_OpenCV_to_categorize_files
* https://research.googleblog.com/2016/09/show-and-tell-image-captioning-open.html
* Could build a Wikidata-game-like interface to provide training data or to assess predictions before inclusion on Commons.
* Structured data on commons!
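The Wikidata-game-like interface idea above amounts to a review queue: show volunteers one prediction at a time, record accept/reject, and feed accepted pairs back as training labels. A sketch under assumed, illustrative names (nothing here reflects an existing tool):

```python
# Hypothetical sketch of a review queue for model predictions: accepted
# predictions become labelled training data. All class and method names
# are illustrative assumptions.

from collections import defaultdict

class ReviewQueue:
    def __init__(self):
        self.pending = []                 # (filename, predicted category) awaiting review
        self.labels = defaultdict(list)   # filename -> human-confirmed categories

    def add_prediction(self, filename, category):
        self.pending.append((filename, category))

    def review(self, accepted):
        """Pop the next prediction; keep it as a training label if accepted."""
        filename, category = self.pending.pop(0)
        if accepted:
            self.labels[filename].append(category)
        return filename, category

queue = ReviewQueue()
queue.add_prediction("Example.jpg", "dog")
queue.add_prediction("Example.jpg", "car")
queue.review(True)    # volunteer confirms "dog"
queue.review(False)   # volunteer rejects "car"
print(dict(queue.labels))
# {'Example.jpg': ['dog']}
```

Rejections are also useful signal (hard negatives), so a real tool would likely store them too rather than discard them.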
**Related efforts:**
* {T49492}
* {T76886}
* {T135993}