Currently, users can only search Commons images with text-based queries (titles, categories, and structured data). This approach falls short when a user has an image but not the right keywords to describe it.
I propose adding a visual search feature that lets users upload or paste an image; the system would then use AI-based image recognition (e.g., embedding models and similarity search) to find visually related files within Wikimedia Commons.
Benefits:
* Greatly improves discoverability of media content.
* Helps editors find similar or duplicate files.
* Useful for GLAM, educational, and research purposes.
* Can connect with structured data (SDC) to enhance tagging and metadata.
Possible Implementation:
* Integration with Wikimedia’s existing machine learning infrastructure (e.g., Lift Wing).
* Use of open-source image similarity models (e.g., CLIP, ResNet embeddings).
* UI element added to the Commons search page: “Search by image.”
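To make the embedding-based approach concrete, here is a minimal sketch of the retrieval step: given an embedding of the query image and precomputed embeddings for indexed files, rank files by cosine similarity. The file names and embedding vectors below are hypothetical placeholders; in practice the vectors would come from a model such as CLIP or a ResNet served via Lift Wing, and the index would live in a vector store rather than a Python dict.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search_by_image(query_embedding, index, top_k=3):
    """Rank indexed files by similarity to the query embedding."""
    scored = [(name, cosine_similarity(query_embedding, emb))
              for name, emb in index.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

# Hypothetical precomputed embeddings (illustrative 3-d vectors;
# real model embeddings have hundreds of dimensions).
index = {
    "File:Cat.jpg": [0.9, 0.1, 0.0],
    "File:Dog.jpg": [0.1, 0.9, 0.1],
    "File:Kitten.jpg": [0.7, 0.3, 0.2],
}
query = [0.85, 0.15, 0.05]  # embedding of the uploaded image
results = search_by_image(query, index, top_k=2)
```

At Commons scale, the linear scan would be replaced by an approximate nearest-neighbor index, but the ranking logic is the same.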