
Proposal for GSOC 2019: Add Structured Commons support to Commons Android app
Open, Needs Triage · Public

Description

Wikimedia Commons App

Add Structured Commons support to Commons Android app

Mentors : Nicolas Raoul, Ashish Kumar
Vanshika Arora
IIT (ISM) Dhanbad
India - 826004

Personal Details

Name: Vanshika Arora
Email: vanshikaa937@gmail.com
Time Zone: UTC +5:30
Github Handle: @vanshikaarora
IRC Nick: vanshikaarora
University: Indian Institute of Technology (Indian School of Mines), Dhanbad
Blog: medium.com/@vanshika937
Typical Working Hours: 1 pm to 12 midnight (UTC+5:30)

Synopsis

About the App
The Wikimedia Commons Android app allows users to upload pictures from their Android phone or tablet to Wikimedia Commons. Wikimedia Commons accepts only freely licensed media files (or files in the public domain). Users can upload images and then add various details to them, such as category, title, description, and license.

About the Project
The project aims to move the app towards Structured Commons. The plan is to modify the current handling of media details in Upload, Search and Explore in accordance with the Structured Commons property table. The feature aims to use captions instead of the filename when uploading, searching for, or showing media, as specified here.

Why is this feature needed?

  • Through this project we aim to improve the quality of details for the media uploaded to Commons. This feature will center the UI around captions, whereas currently the UI is centered around the filename. A caption is similar to a title, except that it is multilingual and limited to 255 characters. Reasons for migrating from title to caption:
  • File names are not translatable.
  • In Explore, showing the title barely gives any substantial information about the image.
  • The description does provide information, but it can be a huge wall of text since it contains more details. Captions of at most 255 characters would therefore be more readable than the description.
  • Captions would be multilingual, like the current description of images.
  • While searching for an image in the Search activity, the caption is more relevant than the filename.

What can be implemented?

  • Adding a field for captions instead of title in Upload Activity
  • Making captions multilingual, just like the description of images
  • Adding the caption to media details when viewing images from Explore and Contributions
  • Showing the caption instead of the filename (in My Contributions and Explore)
  • Adding “depicts” before categories in Upload Activity
  • Implementing search for depicts

How can it be implemented?

1. Structured Commons Captions property
  • Submit captions to structured data
  • Replace the Title field with a multilingual captions field
  • Upload the image with its MediaInfo ID
2. Show caption in media details
  • Add a field for captions in media details
  • Fetch captions using the MediaWiki API
3. Show caption instead of filename (in My Contributions and Explore)
  • Use the MediaWiki API to fetch captions
  • Use captions instead of the previously used titles in My Contributions and Explore
4. Structured Commons Depicts property
  • Add a Depicts field in Upload Activity
  • Send this data to the server along with the other media details, possibly as part of the description if the server-side feature is not yet available at that time
5. Storing data like file type in Wikidata
6. EXIF(-like) data
  • Extract metadata (such as EXIF information) from files
  • Store the metadata in Wikibase
7. Two-dimensional images
  • Extract specific information from two-dimensional images
  • Store this information in Wikibase
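The caption-submission step above can be sketched as building a `wbsetlabel` request against the file's MediaInfo entity (structured captions are stored as labels on `M`-prefixed entities). This is a minimal illustration; the endpoint URL, CSRF-token handling and the networking layer are left out, and the `CaptionRequestBuilder` class is a hypothetical name, not the app's code.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

public class CaptionRequestBuilder {

    // Build the POST parameters for setting a caption (a label on a
    // MediaInfo entity) via the MediaWiki Action API's wbsetlabel module.
    public static Map<String, String> buildSetCaptionParams(
            String mediaInfoId, String languageCode, String caption, String csrfToken) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("action", "wbsetlabel");
        params.put("id", mediaInfoId);        // e.g. "M12345" for a file page
        params.put("language", languageCode); // e.g. "en", "hi"
        params.put("value", caption);
        params.put("token", csrfToken);
        params.put("format", "json");
        return params;
    }

    // URL-encode the parameters into an application/x-www-form-urlencoded body.
    public static String encode(Map<String, String> params) {
        StringBuilder body = new StringBuilder();
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (body.length() > 0) body.append('&');
            body.append(URLEncoder.encode(e.getKey(), StandardCharsets.UTF_8))
                .append('=')
                .append(URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8));
        }
        return body.toString();
    }

    public static void main(String[] args) {
        System.out.println(encode(buildSetCaptionParams("M12345", "en", "A red fox", "dummytoken")));
    }
}
```

Sending one such request per language would make the caption multilingual.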

Timeline:

Community Bonding Period:

  • Communicate and bond with students and mentors
  • Create specific issues for the project
  • Get familiar with the app architecture
  • Get familiar with the Structured Commons property table
  • Learn more about Structured Commons
  • Read about the APIs to be used as part of the project

Deliverables

  • Community bonding report
  • Blogs about the experience, app architecture

Week 1:

  • Removing View Flippers from Upload Activity as discussed here: https://github.com/commons-app/apps-android-commons/issues/2402
  • Creating a new UI for Upload with separate activities for each task:
  • UploadActivity >> receives the images, i.e. the first screen
  • DepictsActivity >> “What does this image depict?”
  • CategorySelectionActivity >> allows you to select categories
  • LicenseSelectionActivity >> lets you select the license
  • Adding multilingual captions instead of the title in Upload Activity

Deliverables:
Modified UI of Upload Activity with captions instead of title, and a Depicts Activity

Week 2:

Refactoring the existing code into classes for each of these separate activities, following the MVP architecture

Deliverables:
MVP based Upload Activity, DepictsActivity, CategorySelectionActivity, LicenseSelectionActivity

Week 3:

  • Studying the Structured Commons APIs
  • Removing the previous API calls that send titles to the Wikimedia database
  • Adding depicts along with the image description in media details, to send via the API
  • Implementing the Structured Commons API to send multilingual captions to the Wikimedia database

Deliverables:

  • Complete (MVP-based) activities for Upload
  • Sending captions instead of the title to the Wikimedia database
  • Sending depicts along with the description via the API

Week 4:

Adding unit tests for each of the separate activities
Deliverables:

  • Unit tests for the Upload Activities
  • Show caption in contributions and explore

Week 5:

  • Study the Wikimedia APIs
  • Fetch captions from the APIs in all locales
  • Modify the local database for contributions to store captions in all locales
  • Display captions in the user's locale if available; otherwise display them in English
  • Display captions in media details in the user's locale if available; otherwise display them in English
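The locale-fallback rule for Week 5 can be sketched as below. The `CaptionFallback` helper is a hypothetical illustration, assuming captions arrive as a language-code-to-text map (the shape of the `labels` structure returned by `wbgetentities`):

```java
import java.util.HashMap;
import java.util.Map;

public class CaptionFallback {

    // Pick the caption in the user's locale, falling back to English,
    // then to any available language, then to null if there are none.
    public static String pickCaption(Map<String, String> captions, String userLocale) {
        if (captions == null || captions.isEmpty()) return null;
        if (captions.containsKey(userLocale)) return captions.get(userLocale);
        if (captions.containsKey("en")) return captions.get("en");
        return captions.values().iterator().next(); // last resort: any language
    }

    public static void main(String[] args) {
        Map<String, String> captions = new HashMap<>();
        captions.put("en", "A red fox");
        captions.put("hi", "एक लाल लोमड़ी");
        System.out.println(pickCaption(captions, "fr")); // falls back to English
    }
}
```

The last-resort branch goes slightly beyond the text above (which only mentions English); it simply avoids showing nothing when neither the user's locale nor English has a caption.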

Deliverables:

Contributions and Explore showing captions in the user's locale

Week 6:

Add a depicts field in media details and show the corresponding depicts information
Deliverables:

  • Caption in Media details available in user’s locale
  • “What does this image depict” available in Media Details

Week 7:

  • Studying the current search interface of Category Activity
  • Implementing the same activity to search for "depicts" targets when uploading media
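Since "depicts" targets are Wikidata items, the suggestion search could reuse Wikibase's public `wbsearchentities` module. A hedged sketch of the request URL only; the `DepictsSearch` class is a hypothetical name and the app's actual API client is not shown:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class DepictsSearch {

    // Build a wbsearchentities request URL that suggests Wikidata items
    // matching the prefix the user has typed so far.
    public static String buildSearchUrl(String query, String language) {
        return "https://www.wikidata.org/w/api.php"
                + "?action=wbsearchentities"
                + "&search=" + URLEncoder.encode(query, StandardCharsets.UTF_8)
                + "&language=" + language
                + "&type=item"
                + "&format=json";
    }

    public static void main(String[] args) {
        System.out.println(buildSearchUrl("red fox", "en"));
    }
}
```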

Deliverables
Search Interface in Depicts Activity

Week 8:

  • Studying the current search interface of the Search Activity in Explore
  • Implementing the same search for searching via “depicts”

Deliverables
Implementing search via depicts in Search Activity

Week 9

Unit tests for depict-based searches

Week 10:

  • Learning about the APIs to update and read the media type in Wikibase
  • Extracting the media type in the app and calling the Wikibase API to update it when uploading media from the app
  • Calling the Wikibase API to read the media type and showing it in the Media Details fragment (optional)
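The extraction step might start from the file's MIME type. A minimal sketch; the `MediaTypeDetector` helper and the coarse category names are assumptions for illustration, and the actual Wikibase call is omitted:

```java
public class MediaTypeDetector {

    // Map a MIME type to a coarse media-type string. In the app the MIME
    // type could come from ContentResolver.getType() on the picked Uri.
    public static String mediaTypeFor(String mimeType) {
        if (mimeType == null) return "unknown";
        if (mimeType.startsWith("image/")) return "image";
        if (mimeType.startsWith("video/")) return "video";
        if (mimeType.startsWith("audio/")) return "audio";
        if (mimeType.equals("application/pdf")) return "document";
        return "unknown";
    }

    public static void main(String[] args) {
        System.out.println(mediaTypeFor("image/jpeg")); // image
    }
}
```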

Deliverables:

  • Feature to update the media type in Wikibase from the app
  • Media details page with details about the media type (optional)

Week 11:

  • Extracting EXIF-like properties in the app (e.g. equipment manufacturer, camera model)
  • Learning about the APIs to update EXIF-like data in Wikibase
  • Calling the Wikibase API to update EXIF-like data when uploading media from the app
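The collection step could gather tag values into a map that a later step would translate into Wikibase statements. A generic sketch: the tag names mirror Android's `ExifInterface` constants, the `ExifCollector` class is hypothetical, and no real Wikibase property ids are claimed here:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ExifCollector {

    // Collect EXIF-like fields into a tag -> value map, skipping absent tags.
    // Each entry could later become one Wikibase statement (an assumption).
    public static Map<String, String> collect(String make, String model, String dateTime) {
        Map<String, String> exif = new LinkedHashMap<>();
        if (make != null) exif.put("Make", make);              // equipment manufacturer
        if (model != null) exif.put("Model", model);           // camera model
        if (dateTime != null) exif.put("DateTime", dateTime);  // capture time
        return exif;
    }

    public static void main(String[] args) {
        System.out.println(collect("Canon", "EOS 70D", null));
    }
}
```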

Deliverables:
Feature to update EXIF-like data in Wikibase from the app.

Week 12:

If time allows:

  • Extracting properties of two-dimensional images in the app (e.g. resolution, location)
  • Learning about the APIs to update properties of two-dimensional images in Wikibase
  • Calling the Wikibase API to update these properties when uploading media from the app

Deliverables:

  • Feature to update properties of two-dimensional images in Wikibase from the app
  • Writing unit tests for all the remaining activities (Contributions, Explore, Media Details)

Final week

  • Writing unit tests for all the activities
  • Working on the project presentation
  • Improvements based on feedback received from mentors and other community members
  • Exhaustive manual testing on different devices and emulators
  • Writing documentation for the final submission

Deliverables

  • Unit tests
  • Project presentation
  • Other deliverables
  • Weekly report and blog

Participation

Progress Report

● I will remain online on IRC and Hangouts during my working hours (1 pm to 12 midnight, UTC+5:30)
● I will write weekly blog posts at https://medium.com/@vanshikaa937
● I will share my blog posts on Twitter
● I will write weekly scrum reports and send them to our mailing list
[commons-app-android@googlegroups.com]
○ What did I do last week?
○ What will I do this week?
○ What is currently preventing me from reaching my goals?
● I will submit a project presentation
Where I plan to publish my source code
● I will work on a separate branch on git, pushing code to my forked repository
almost daily, and create pull requests when a complete feature is done.
Communication on tasks
● I will use GitHub to manage bugs and tasks.

About me

Personal Background
I am a second-year student at the Indian Institute of Technology (ISM) Dhanbad, pursuing Mathematics and Computing. I am an active member of the Cyber society at our institute and organise the weekly Android development workshops headed by the society.

  • Secured fifth position at the 36-hour hackathon organised by BIT Mesra
  • Secured a percentile of 97.5 at the event Paridhan organised by Samsung
  • Secured a top percentile in the Microsoft Coding Challenge and was invited to the Microsoft Codess event
  • Selected for Smart India Hackathon 2019

How did I hear about this program
I heard about GSoC a few months back when my college organised Winter of Code, a month-long open-source hackathon. Since then I have been an open-source enthusiast and have wanted to take part in Google Summer of Code.
Time during Summers
I have no other commitments this summer, so I will be able to give 40 hours or more per week, and I am ready to commit extra time if needed to finish the goals of the project. My summer break starts on 1st May, so I can start working full time from that day. I will not be taking any vacations. My classes resume around 29 July, but I will still be able to commit enough time to the project as there are no exams during that period.

Eligible for Google Summer of Code and Outreachy ?
I am eligible for both GSoC and Outreachy and am applying to both, but only under Wikimedia Commons Android.

What excites me about this project?
The kind of exposure and experience a platform like this would provide me is a huge reason for me to want to be a part of it. I have developed a lot of apps during my college days, but I have always wanted to develop apps that really help people at a global scale. I find nothing more exciting than working on a project with a goal as impactful as this one. I sincerely love the Wikimedia Foundation's goal of being a "global movement whose mission is to bring free educational content to the world" and would be more than happy to work towards it. It would be really great for me to apply my skills and contribute to such an organization.

Why should I be selected for the project ?
I have always been interested in open-source projects and have been passionately working on them. I can push myself to my boundaries, come out of my comfort zone and work things out. I am a very good team player and can learn things quickly and adapt. I have been developing Android apps for more than 1.5 years, and having contributed to open-source projects, I believe I have the skills required to finish the proposed goals. I have 2 months of intern experience, so I believe I will follow the practices that give the best performance for the application. I have been contributing to the Wikimedia Commons app for over 80 days and will keep contributing to it even after the GSoC period. I have contributed a lot to the app (around 50 PRs) and have knowledge of the app architecture and the MediaWiki API.

Past Experience

I have been doing Android development since last year. I have successfully completed two internships in Android development and have published one app on the Play Store. I also secured fifth position in Hack-a-bit, a 36-hour hackathon organised by BIT Mesra. I am a blogger and frequently write blogs related to Android or Java at my handle medium.com/@vanshikaa937. I have experience working with the Google Cloud Vision APIs and with Django. I use git and GitHub every day, and I am well acquainted with using them for version control.

Intern Projects:

Bobble AI technologies

Roles and Responsibilities

  1. Responsible for introducing dynamic WhatsApp stickers into the Bobble app.
  2. Converting the existing Bobble PNG images to a suitable format and adding them to WhatsApp as stickers.
  3. Introducing the driving assistant in the Bobble app: while driving, the app reads out WhatsApp notifications to the user and then records their response as an audio message to be sent via WhatsApp.

Challenges Faced:
The sample application for WhatsApp stickers stores the images in the project's raw folder. I created my application such that it loads PNGs from various APIs, edits the loaded images, and then adds them to WhatsApp without adding any of these images to the raw folder.
The application also had to work efficiently, i.e. the process of loading, processing and transferring images should not block the UI or slow down the application.
Internship Certificate- https://drive.google.com/open?id=1JajUpPDIC77sOAARTRXe-x4nehsYs_-P

MindOrks

Worked Upon:
An Android application that supports audio and video.
The application uses the MVVM architecture for maintainability and testability.
Other components like Dagger and ButterKnife were also used in its development.

Learning
Through this application I learnt about various modern components in Android development, such as the MVVM architecture, ButterKnife, Dagger, Data Binding and Kotlin.
Internship Certificate- https://drive.google.com/open?id=0B9DOFtsZNXVBYzJmdVdGb3V4dEdPNVRvWlZFRm9aZnN4VjJZ

Self Projects:

GoogleSTT

Github: https://github.com/vanshikaarora/GoogleSTT
This Android application has the following features:

  • It allows Stop listening | Start listening modes by pressing the mic
  • It allows the user to correct or edit the spoken text
  • It shows a list of sentences as the user speaks, giving the user a choice to select from previously spoken sentences stored in the database
  • Spoken text is also kept in Realm (a local mobile database)
  • Sentences are retrieved from the database using a trie: as soon as the user types or speaks, it can display a set of matching sentences
  • Offline caching is done in Realm, and all the data is also sent to Firebase
  • It first displays the list of sentences from Realm, and when data is fetched from Firebase, more sentences are added to the same list
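The trie-based retrieval described above works roughly like this generic sketch (a re-creation for illustration, not the app's actual code):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SentenceTrie {
    private final Map<Character, SentenceTrie> children = new HashMap<>();
    private String sentence; // set on the node where an inserted sentence ends

    // Index a sentence character by character (case-insensitive keys).
    public void insert(String s) {
        SentenceTrie node = this;
        for (char c : s.toLowerCase().toCharArray()) {
            node = node.children.computeIfAbsent(c, k -> new SentenceTrie());
        }
        node.sentence = s;
    }

    // Return every indexed sentence starting with the given prefix.
    public List<String> searchPrefix(String prefix) {
        SentenceTrie node = this;
        for (char c : prefix.toLowerCase().toCharArray()) {
            node = node.children.get(c);
            if (node == null) return new ArrayList<>();
        }
        List<String> out = new ArrayList<>();
        node.collect(out);
        return out;
    }

    // Depth-first walk gathering all sentences below this node.
    private void collect(List<String> out) {
        if (sentence != null) out.add(sentence);
        for (SentenceTrie child : children.values()) child.collect(out);
    }

    public static void main(String[] args) {
        SentenceTrie trie = new SentenceTrie();
        trie.insert("hello world");
        trie.insert("help me");
        System.out.println(trie.searchPrefix("hel")); // both sentences match
    }
}
```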

Med.ai

Github: https://github.com/hackabit18/Defaulting
This is a prescription-tracking Android app that minimizes user input. The user scans a medicine with the app; using the Google Cloud Vision API for OCR, we extract the medicine name and fetch all relevant information about the medicine from our database. The database was built by scraping data from various websites.

Concetto (Available on Play Store)

Play Store link: https://play.google.com/store/apps/details?id=com.perul.vanshika.concetto

This Android application, currently available on the Play Store, was designed for Concetto, the annual techno-management fest of IIT (ISM) Dhanbad (considered the largest fest of eastern India). Users can register via this application, and the app also displays details of all the events, e.g. their timings.

Brainy

Github link : https://github.com/vanshikaarora/Brain-Image-Segmentation-using-ML

The goal of this project is to develop segmentation methods to fragment features of the brain, like white matter and tumours, in 3D, based on ML techniques that require no human intervention and are robust to the low quality of medical images, with a user-friendly GUI. The GUI was developed in Python using Flask.

JustDial Scraper

Github link: https://github.com/vanshikaarora/JustDialScrapper
This Python-based project uses Beautiful Soup to scrape the relevant data (name, address, phone number and other details of firms) from the JustDial website.

Other Experiences

Blogs published at various publications:

Published at Bobble Engineering

Relevant Skills

  • Familiar with modern architecture components: RxJava, Dagger 2, WorkManager, and the MVP and MVVM patterns
  • Experience with Android development in Kotlin
  • Familiar with writing unit tests in Kotlin
  • Basic knowledge of the Wikimedia APIs
  • Android UI design
  • Python for Django and Flask
  • GitHub workflow and git version control
  • Ubuntu

Contributions to Wikimedia Commons App

I have been involved with the app for the past 80 days, and it has been an enriching experience contributing to it. In this time I have fixed some major and minor bugs and also implemented some new features in order to get familiar with the code base.
Pull Requests:
Link to my Pull Requests merged in this project
Link to my open Pull Requests

Event Timeline

Restricted Application added a subscriber: Aklapper. · Apr 5 2019, 12:41 PM
Aklapper renamed this task from "Proposal for GSOC 2019" to "Proposal for GSOC 2019: Add Structured Commons support to Commons Android app". · Apr 5 2019, 12:45 PM
Aroravanshika updated the task description. · Apr 8 2019, 5:27 PM

If you would like us to consider your proposal for review, please move it to the submitted column on Google-Summer-of-Code (2019) board.