Name: Lalit Suthar
Web Profile: https://lalit97.github.io/
Location: Bikaner, India
Working hours: 7 pm to 11 pm and 8 am to 10 am (UTC +5:30)
Short summary describing your project and how it will benefit Wikimedia projects
Wikidata is one of the largest hosts of open data. Missing and new information must be added by community members; otherwise, the knowledge base will stagnate and become obsolete over time.
My project's goal is to create an ecosystem of Web service interfaces for pushing data from different sources, so that researchers and industry can easily integrate it into their systems and efficiently help extend the Wikidata knowledge base.
The impact: data donations will be possible directly from different sources, and the process will become easier and less time-consuming for contributors. Hence, both the quality and quantity of the open data offered by Wikidata will increase.
Later, I plan to implement a recommendation engine that suggests agreeable facts to Wikidata editors that are likely to fall within their area of expertise, as well as gamification features to show appreciation to Wikidata editors, e.g., a badge Web service allowing users to integrate their score/rank into their profiles on social networks. This will encourage editors to keep contributing.
Have you contacted your mentors already?
Yes, I have contacted them on Phabricator and Slack.
Describe the timeline of your work with deadlines and milestones, broken down week by week. Make sure to include time you are planning to allocate for investigation, coding, deploying, testing and documentation
- Implement Webservice interfaces for data donations, i.e., sets of approvable facts (which should also contain links to evidence, etc.)
- Implement a recommendation engine that suggests agreeable facts to Wikidata editors that are likely to be in their area of expertise.
- Extend the gamification features to show appreciation to Wikidata editors. Example: a badge Web service allowing users to integrate their score/rank into their profiles on social networks (e.g., on their Wikidata user page, GitHub profile, or LinkedIn profile) to show their dedication and motivate other users.
Assign a badge according to the total number of contributions: 0-10: Bronze, 11-30: Silver, 31+: Gold.
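The badge tiers above can be sketched as a small helper; the function name is illustrative, assuming only the thresholds listed (0-10 Bronze, 11-30 Silver, 31+ Gold).

```python
def badge_for(contributions: int) -> str:
    """Map a contributor's total approved contributions to a badge tier.

    Thresholds follow the proposal: 0-10 Bronze, 11-30 Silver, 31+ Gold.
    """
    if contributions < 0:
        raise ValueError("contribution count cannot be negative")
    if contributions <= 10:
        return "Bronze"
    if contributions <= 30:
        return "Silver"
    return "Gold"
```

The badge Web service would call this after counting the user's contributions and render the tier into an embeddable image.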
(IMG: sample mockup of the first task)
| Dates | Plan |
|---|---|
| May 20 to June 12 | (Community bonding period) Brainstorm, discuss, and plan the implementation details: the CSV format to accept, the libraries to use, etc. Set up WikidataComplete locally. Learn more about SPARQL. Read about the implementation of the Mix'n'match tool. |
| June 13 to June 19 | Add a new UI and a minimal API to offer a file-upload option and handle the data in the backend. |
| June 20 to June 26 | Add validation checks in the backend for the uploaded file to ensure the security and quality of the data. |
| June 27 to July 3 | Extract item, property, and value from the validated CSV rows. |
| July 4 to July 10 | Understand the current WikidataComplete flow for generating new facts from the extracted items, properties, and values. Map them to new facts that editors can approve later. |
| July 11 to July 17 | Code cleanup. Testing; fixing bugs and errors. |
| July 18 to July 24 | Write documentation and a blog post. Deploy. |
| July 25 to July 31 | Phase 1 evaluations. Get feedback from mentors and complete functional testing. |
| August 1 to August 7 | Fetch the user's previous contributions via the MediaWiki API. Count how much the user contributes to each category. |
| August 8 to August 14 | Recommend facts to the user based on their top categories, using the counts saved the previous week. |
| August 15 to August 21 | Fetch data from the user's Wikidata account to count the total number of contributions. Implement sharing user contributions on the Wikidata user page. |
| August 22 to August 28 | Implement sharing user contributions on GitHub and LinkedIn. |
| August 29 to September 4 | Work on performance improvements and finish end-to-end functional and unit testing. Fix bugs and errors. |
| September 5 to September 12 | Blog posts. Fix any remaining bugs. Complete documentation. Deploy and get feedback from the community and users. |
| September 13 to September 19 | Mentors submit final GSoC contributor evaluations. |
| September 20 | Initial results of Google Summer of Code 2022 announced. |
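The CSV weeks above (upload validation, then item/property/value extraction) could look roughly like the sketch below. The column names and checks are my assumptions for illustration; the real CSV format will be decided with mentors during the community bonding period.

```python
import csv
import io
import re

# Assumed column layout for donated facts; the actual format is still
# to be decided during community bonding.
REQUIRED_COLUMNS = {"item", "property", "value", "evidence_url"}
QID_RE = re.compile(r"^Q\d+$")   # Wikidata item IDs, e.g. Q42
PID_RE = re.compile(r"^P\d+$")   # Wikidata property IDs, e.g. P31

def extract_facts(raw: str) -> list:
    """Validate an uploaded CSV and return its rows as candidate facts.

    Rows with malformed item/property IDs or empty values are skipped
    rather than failing the whole upload.
    """
    reader = csv.DictReader(io.StringIO(raw))
    if reader.fieldnames is None or not REQUIRED_COLUMNS.issubset(reader.fieldnames):
        raise ValueError("CSV must contain columns: %s" % sorted(REQUIRED_COLUMNS))
    facts = []
    for row in reader:
        if not QID_RE.match(row["item"]) or not PID_RE.match(row["property"]):
            continue  # malformed entity/property ID
        if not row["value"].strip():
            continue  # empty value
        facts.append({
            "item": row["item"],
            "property": row["property"],
            "value": row["value"].strip(),
            "evidence": row["evidence_url"],
        })
    return facts
```

The returned dictionaries would then be mapped to approvable facts in the existing WikidataComplete flow.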
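The recommendation weeks (counting per-category contributions, then suggesting facts from the user's top categories) can be sketched as pure logic once the contributions have been fetched via the MediaWiki API. The data shapes here are assumptions for illustration.

```python
from collections import Counter

def top_categories(contribution_categories, k=3):
    """Return the user's k most-contributed categories, most frequent first.

    `contribution_categories` is the list of category labels of the user's
    previous edits (an assumed shape).
    """
    return [cat for cat, _ in Counter(contribution_categories).most_common(k)]

def recommend(pending_facts, contribution_categories, k=3):
    """Suggest pending facts whose category matches the user's top categories."""
    preferred = set(top_categories(contribution_categories, k))
    return [fact for fact in pending_facts if fact.get("category") in preferred]
```

In the real system the counts would be persisted per user (as planned for the week of August 1 to 7) instead of recomputed on every request.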
Describe how you plan to communicate progress and ask for help, where you plan to publish your source code, etc
- I will use GitHub for version control. Each phase will be a new branch and will be deployed in an iterative cycle.
- I will get feedback on code and functionality after each deliverable.
- I will be active on Zulip and Slack, discussing design decisions, feature implementations, bugs, and code reviews with my mentors.
- I will regularly write blog posts on Medium (Lalit Suthar) about my experience and what I learn from this project.
Your education (completed or in progress)
I graduated with a bachelor's degree in IT from Engineering College Bikaner in 2020.
How did you hear about this program?
I heard about the program from my college seniors.
Will you have any other time commitments, such as school work, another job, planned vacation, etc, during the duration of the program?
I am working as a software developer at Redhuntlabs. Apart from that, I do not have any other time commitments.
We advise all candidates eligible for Google Summer of Code and Outreachy to apply for both programs. Are you planning to apply to both programs and, if so, with what organization(s)?
No, I am applying for Google Summer of Code only with Wikimedia.
What does making this project happen mean to you?
I have learned a lot from the open-source community; people here are very kind and helpful. My interactions with the community helped me get a job in software.
I will feel privileged to get a chance to give something back to a community whose work impacts such a large number of people.
Please add links to any feature or bug fix you have written for a Wikimedia project during the application phase.
| Status | Ticket | Description | PR |
|---|---|---|---|
| Open | T302882 | Don't create a duplicate Bundle authorization when waitlisted resources are moved to Bundle | #988 |
| Merged | T299883 | Enable linking a suggestion to a Phabricator ticket | #980 |
Describe any relevant projects that you've worked on previously and what knowledge you gained from working on them.
- Krishna Lessons: a Twitter bot that posts one verse from the Bhagavad Gita daily. Live at krishnalessons.
- Covid Hospital Tracker: a crowdsourced initiative to track Covid hospitals in India. Live at covid-hospitals-tracker.
- Feedfetcher: a news aggregator that collects news summaries across categories from given RSS feed URLs, processing the feeds asynchronously to fetch news content.
Describe any open source projects you have contributed to as a user and contributor (include links).
| Status | Ticket/Project | Description | PR |
|---|---|---|---|
| Closed | T268404 | Suggestion for adding more commands in README | #554 |
| Merged | T262904 | Show only recently filed applications, and their submission date, on application evaluation pages | #529 |
| Merged | #496 | Test class added in another class in application/tests | #498 |
| Merged | T186502 | Provide a list of applicant's other recent applications on review page | #453 |
| Merged | T206553 | Add a new application field for waitlist status | #446 |
| Merged | T239503 | Handle modification of application status from INVALID to anything else | #429 |
| Merged | T212767 | Provide error message when navigating to Not Available resources | #427 |
| Merged | T218857 | Add more links to activity feed | #414 |
| Merged | T170113 | Add 'Back-to-top' button for partners list page | #410 |
| Merged | T226369 | Add condition to check user is current coordinator | #406 |
| Merged | T234551 | Show Error on 'Mark as Sent' page for automatic-send partners | #405 |
| Merged | T193334 | Add placeholder text to Review filters | #404 |
| Merged | T243746 | Add function to handle file name given as url | #573354 |
| Merged | T243311 | Change 'VideoTrim Settings' position above the 'Step2' | #569217 |
| Open | T216400 | Allow wildcard project searches | #15 |
| Open | T216399 | Allow hashtag project searches | #10 |
| Closed | ZubHub | Docker related steps in local setup guide | #328 |
| Merged | Django Rest Framework | Fix broken article link | #7918 |
| Closed | Stripe | Unable to trigger requires_action event | #624 |
| Merged | FindAroundYou | Upgraded django and gunicorn | #11 |
Activate the WikidataComplete plugin [6,7] in your Wikibase account to work with the current implementation
Activated the WikidataComplete plugin following the instructions at #installation and made a couple of edits using it.
Select 3 Wikidata entities and manually find missing facts based on external data sources
Made edits in Wikidata by
- Manually finding missing facts based on external data sources (Tag: Wikidata User Interface)
- Using Recoin ([Edited with Recoin] (Wikidata:Recoin))
- WikidataComplete UI (Tag: WikidataComplete-QACompany [1.0])
- WikidataComplete plugin (Tag: WikidataComplete-QACompany [1.0])
List of all edits: Lalit Wikidata contributions
Check out this tool https://www.wikidata.org/wiki/Wikidata:Primary_sources_tool
I read about the Primary Sources Tool via the given link but could not see it in action because it has been down for a week.
Understand the WikidataComplete UI and APIs [4,5], which are currently considered to be the (default) data donor
- To get facts about a given Wikidata entity:

```shell
curl --location --request GET 'https://qanswer-svc3.univ-st-etienne.fr/facts/get?qid=Q10953517&format=json'
```

- To get a fact for a random Wikidata entity:

```shell
curl --location --request GET 'https://qanswer-svc3.univ-st-etienne.fr/fact/get?id=EMPTY&category=EMPTY&property=EMPTY'
```
Get familiar with the data structure available in Wikidata
Set up a MediaWiki development environment