
Wikimedia Hackathon 2019 Showcase / Closing
Closed, Resolved, Public


Showcase is on Sunday from 3pm - 6pm in Balling Hall run by Siebrand and Rachel.

Participants will get to showcase their hackathon projects and we will close out the event and say goodbye. Presentations are limited to 30 total, with 2 minutes each.

Note: the contents below were originally added to this Etherpad document, and moved here after the event.

------------------------------[ ^_^ ]-----------------------------------

🥳 Wikimedia Hackathon 2019 Showcase & Project list 🥳

Prague - on Sunday from 3pm - 6pm in Balling Hall run by Siebrand and Rachel

I / We want to be in Sunday's Hackathon Showcase (limited to 30 presenters, with 2 minutes each)
Not presenting or leaving early: here is some info documenting our project.

I / We want to be in Sunday's Hackathon Showcase (limited to 30 presenters, with 2 minutes each)

Fill in a section below with your details (remove unneeded lines) - replace the placeholder text!
If possible, please explain your project without using a computer
The second-best option is to add a link to this etherpad and use the provided laptop, from where you can access any demo you need.
You cannot use your own laptop. Sorry. There are many projects to showcase and we can’t give extra time.
If you do not wish to be filmed, then please write that in the special requirements section, and we will move you to the end of the showcase. :-)

Deadline for showcase registration is Sunday at 12:30.
Just before the showcase, we will trim and reorder this list. Presentations will be from top to bottom. If you are not on the list, you unfortunately have not been selected to present.
Between 14:15 and 14:45, you can test on the presentation laptop in Balling Hall if needed.


Brief : Developers on Toolforge can now issue log messages from the command line, using the dologmsg command. Previously, you would have to be in IRC and send !log messages in #wikimedia-cloud.
Video: (38 s)
Contacts: @LucasWerkmeister / @Lucas_Werkmeister_WMDE
pre-load the video so it doesn’t have to load during the demo, I suppose (video is on desktop)
feature is deployed and available for everyone, announcement email to cloud-l will be sent right after showcase
Status: ????
Speech by Lucas Werkmeister: Previously you had to post log messages in IRC. Now there is a tool that generates the log message for you: the "dologmsg" command. You don't have to be on IRC. 😭


Brief: Wikisource gadget that will derive text from the current page's image via Tesseract.js OCR.
Contacts: @putnik
Status: user script available
Putnik: Tool that helps recognize text on Wikisource pages, based on the Tesseract.js library: pure JavaScript that works right in your browser on every Wikisource project. It adds a button that provides text recognition from the image for the Page namespace, in almost 60 different languages. You can enable it for personal use on Wikisource.

  1. SCRIBE -

Brief: A prototype for a tool to support editors in creating new articles by structuring the sections and suggesting references.
Contacts: @ondrej.merkun, @Isnit001, @Lucie
Status: prototype, user script available but rather proof of concept
Andra, Joe, Lucy: Scribe helps editors create high-quality articles. The prototype is now working! A user searches for a term on Wikipedia; it doesn't exist on Czech Wikipedia, so they click "new article", the visual editor shows up, and then our tool shows up. It suggests section headings from same-language-version articles and suggests references: it finds suggested headings from other related articles, and finds references from newspapers and web search.

  1. FLAGGED REVISION ICONS and SECURE LOGOUT -

Brief: Improving former icons in FlaggedRevs.
Contacts: @Ladsgroup
Status: live next week(s)
Speaker: Amir
Speech: Presenting 2 things.
(1) Flagged revisions icon improvements: the icons for seeing and approving revisions got an updated user interface.
(2) Before this, logging out didn't require a token; no token was sent, so there was no CSRF protection. Logging out without a CSRF token is now deprecated.
Limitations: previously you could mass log out users by spamming a link. Now logging out without a token gives an error, and logout is done as an AJAX POST request with a token instead of a GET request.
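To make the CSRF change concrete, here is a minimal sketch (not the actual MediaWiki code) of what a client has to do now: fetch a CSRF token via the Action API, then POST it with action=logout. The helper names and the session/URL usage are illustrative assumptions.

```python
# Sketch: after this change, action=logout must be a POST carrying a CSRF
# token; a token-less GET now fails. Helper names here are illustrative.

def token_request_params():
    """Parameters to fetch a CSRF token (action=query&meta=tokens)."""
    return {"action": "query", "meta": "tokens", "type": "csrf", "format": "json"}

def logout_request_params(csrf_token):
    """Parameters for the new-style logout POST; the token is now mandatory."""
    return {"action": "logout", "token": csrf_token, "format": "json"}

# With a requests session (hypothetical api_url):
#   r = session.get(api_url, params=token_request_params())
#   token = r.json()["query"]["tokens"]["csrftoken"]
#   session.post(api_url, data=logout_request_params(token))

params = logout_request_params("abc123+\\")
```

Because the token must come from an authenticated session, an attacker-supplied link can no longer log other users out.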

  1. WDQS - GPS-friendly download of query result & Download of RDF Formats - &

Brief: download query result as GeoJSON, GPX, and KML for data containing geolocation, and RDF Formats for CONSTRUCT queries , (query #1), (CONSTRUCT)
Contacts: @Peb
Demo: (GeoJSON), (GPX), (KML), (RDF), (JSON-LD)
Status: beta deployed on my site (outside WM); deployment still requires other issues to be resolved (deciding whether the format is suitable for the current result, and UI/UX usability issues with menu organization)
Peb: 2 tasks on Phabricator.
Problem: only CSV or text were available for geodata or graph results; other formats were not available. Added options to download N-Triples or JSON-LD.
Can now do format conversion without sending another request. Now you get a JSON-LD data view of query results, so you can use them for other applications.
Can now download query results containing geolocation in a format that is GPS-friendly.
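The GeoJSON conversion boils down to reshaping the SPARQL JSON result: WDQS serializes coordinates as WKT "Point(lon lat)" literals, and each binding becomes a GeoJSON Feature. A minimal sketch (not the project's actual code; variable names are assumptions):

```python
import re

def sparql_to_geojson(sparql_json, coord_var="coord"):
    """Convert a WDQS SPARQL-JSON result into a GeoJSON FeatureCollection.
    Assumes coordinates arrive as WKT 'Point(lon lat)' literals, which is
    how the query service serializes coordinate values."""
    features = []
    for binding in sparql_json["results"]["bindings"]:
        wkt = binding[coord_var]["value"]
        m = re.match(r"Point\(([-\d.]+) ([-\d.]+)\)", wkt)
        if not m:
            continue  # skip rows without a parseable point
        lon, lat = float(m.group(1)), float(m.group(2))
        props = {k: v["value"] for k, v in binding.items() if k != coord_var}
        features.append({
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [lon, lat]},
            "properties": props,
        })
    return {"type": "FeatureCollection", "features": features}

sample = {"results": {"bindings": [
    {"itemLabel": {"value": "Prague"},
     "coord": {"value": "Point(14.42 50.08)"}},
]}}
geojson = sparql_to_geojson(sample)
```

GPX and KML exports would walk the same bindings and emit their respective XML instead.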


Brief: WikiGrade is a jury tool for article contests. While it has already been successfully used for three article contests, we would like to develop it to the stage where it could be used by other chapters as well.
Demo: |
Contacts: @Yury_Bulka
[Technical]: notes for myself: login username: Велосипедист
Status: alpha-deployed
Speech by Yury: Will show the last version of an article within the contest timespan. It will compute a score and a ranking of the contributors, to help with prize allocation. Still a lot of work to do: need to create an admin role so that others can log in and create contests with it.


Brief: Logging in without requiring the user to enter a password, just by pressing a button on a pre-registered smartphone.
Contacts: @Florian
Status: Proof of Concept developed
Speech by Florian: Extension that allows you to login using your phone as authentication instead of a password.


Brief: A showcase of interactive demo apps built using the MediaWiki Action API.
Contacts: @srishakatux & @Tuxology
Where is this feature in its lifecycle? Examples: Early conception / ready for beta deployment / beta deployed / feature flagged / production / ...
Speech by Srishti: Previously worked on improving the Action API docs and code samples, and on things built using the MediaWiki Action API. Some people shared cool examples; Twitter and similar sites have galleries of tools, so I made a gallery of interactive demo apps. Thanks to volunteer/friend Tuxology who contributed remotely, and thanks to Volker and Robin for great design feedback.


Brief: With the upcoming release of ShEx in Wikidata there will be a need of more introductory material about ShEx usage in Wikidata. We started to document a possible workflow about the different stages where ShEx schemas can be useful to improve Wikidata quality.
Contacts: @Jelabra, @Andrawaag, @Tombakerii
ShEx project: ready for beta deployment
ShEx Documentation for wikidata: Early lifecycle

[Andra]: Wikidata quality improvement is a perpetual process. [See video] Wikidata items about manga/anime are a mess because they inherit from Wikipedia articles covering a manga, its anime adaptations, one or two movies, and a couple of video games. Shape Expressions (ShEx) can help with this process: ShEx can extract the schema for a certain type of item (e.g. manga), and it can also help with splitting the anime item out of the manga item. A ShEx plugin will be available at the end of May 2019.


Brief: Voice assistants like Alexa, Siri, or Google Assistant are all read-only. What if users could talk back and become first-class participants in our projects?
(What are these links ?)
[Password protected] -- private development interface for Alexa Skills Development
Wikidata sandbox:
(Screenshot of the interaction)
Contacts: @EvanProdromou
Status: Prototype
Evan: 2 out of 5 adults in North America use voice search on a daily basis, and Siri has 500 million weekly users. A huge number get info from Wikipedia/Wikidata without having the option to contribute back.
Worked on an Amazon Alexa Skill (an app that can be installed on Alexa) that does two-way interaction with Wikidata. It first queries Wikidata and gets properties out of it. It will also take qualitative and quantitative feedback: you can say "hey, that fact doesn't sound true" and it will post a comment on the Wikidata talk page saying this seems sketchy. It will also do direct updates on wiki entities and save them to the database. It allows two-way interaction, and it is REALLY cool.


Dutch Caps Lock strikes again. -- it's script-independent. This hurt, but I shouldn't have to log in for a demo. Private UIs ftl


Brief: Experimental solution to provide real-time-ish chat to users in MediaWiki, the idea being to better support new editors and offer human-to-human help. The system allows for use of regular curation/moderation tools in MediaWiki.
[What is that]
Contacts: @kostajh
Status: Experimental, very hacky, early proof of concept
Kosta: Improvement to the Help Panel built by the WMF Growth team ( ), which makes the onboarding process more helpful: while editing, the help panel lets you search for links and get help with editing. Will make an edit on the help desk with a section header so people who check the wiki when it's deployed can see what it is. As a newcomer you have to check notifications to see if someone replied, so I implemented it so newcomers can see replies in real time. Here's a proof-of-concept realtime chat: it queries the recent changes feed, transforms it, and pulls up the updated content to show you. If you want to add new stuff like emojis, reach out. Will leave it running for the afternoon so you can see the content model, test out the chat, etc.


The MediaWiki action API gains a new module, action=query&meta=languageinfo, to get information about languages supported by MediaWiki, such as fallback languages, variants, or writing direction.
currently in code review on Gerrit, will hopefully be merged and deployed to production soon
Lucas: If you wanted info about the languages MediaWiki supports, there was no easy way to get it. Now you get certain info about each supported language (a JSON blob).


Brief: Service for end users to see what Wikimedia and other openly licensed media is available related to each Wikidata item
Achieved: Indonesian and Russian translations; we added Creative Commons Search to Wikidocumentaries image sources. Some sources were filtered out, either because we already have more complete data or because some sources were less relevant from the project perspective; the choice of sources will later be made adjustable. Reading and formatting the input is not finished: the current query response is still incomplete (thumbnails missing, bad matches), and there is performance trouble in the UI. Further work will include first fixing these, and later tools for mapping metadata of individual images to Structured Data on Commons properties and importing the images to Commons when this is done.
Contacts: @Peb, @putnik, @TuukkaH, @Susannaanas
Status: ready for beta deployment --> pending until bugs have been resolved
Tuukkah: demos what media is available in each language. When you scroll you see media from Wikipedia and Wikidata, related people and timelines, and something new: Creative Commons Search. You can add plugins for sourcing other things; e.g. CC Search covers the Metropolitan Museum of Art. You can view photos and other related media, for example on a timeline, and you get metadata in terms of keywords from Wikidata.


Brief: Generate dashboards of property coverage for a given part of Wikidata. Self-service, configured/invoked using a template on Wikidata
Links: (source:
Contacts: @JeanFred (based on original idea by @Multichill )
Status: LIVE on Wikidata (although unlikely to sustain very high load; timeout issues, basically no error handling, etc. ^_^)
Speech by JeanFred: Who loves Wikidata? [[Applause]]
Dashboard of paintings in Wikidata and their properties (completeness). Thought: let's do it on demand for everything, so whatever project you have, you can do it. The way it works is similar to Listeria bot: you give it all the items, such as instances of painting, and the columns we're interested in are then displayed on the dashboard. Wikidata Video Games (a great project you should definitely join) uses it; you can do it for anything, e.g. all instances of beer by country. Should show completeness for any subject area. Please file bugs!


Brief: In-browser push notifications for Echo
Links [what is this]:
Contacts: @Catrope
Status: Cleaned-up / improved version of my Montreal 2017 hackathon project. Needs a little more work before it's ready for review
Project: Wikipedia? All?
Speech by Roan: Worked on Echo push notifications. At this hackathon, made the code not hideous and more efficient, and made it possible to enable and disable these things. In preferences -> notifications there's a new column for push notifications for each notification type. Enabling push notifications asks whether to allow notifications from this domain. Now plants in the audience can ping me on my user talk page, and I can see my new notifications when I'm not in the browser (also works on mobile).


Brief: Upload pictures directly from your Google Drive to Wikimedia Commons !
Demo: (video taken down).
Contacts: @srishakatux @zhuyifei1999 @01tonythomas @Chicocvenancio
Status: Prototype, early pre-release. Coming soon.
Team: needs more people.
Speech by Tony: We came up with a tool for you to upload files from Google Drive to Commons. It opens a dialog that shows you the files being transferred.


Brief: Create a gadget that displays on a Wikipedia article the quality level of the related Wikidata item. Next steps: compare scores to average quality scores of a random Wikidata sample
[What is this?]:T223590.pdf on desktop.,
Contacts: @Arybolab
Access: need to be logged in to Wikipedia with my user name to browse with my personal user configs
Where is this feature in its lifecycle? Examples: Early conception / ready for beta deployment / beta deployed / feature flagged / production / ...
Speech by Anna: Third time doing coding for Wikimedia! I wanted to visualize Wikidata content, like clustering data points, though I learned that the data is too incomplete to get useful insights at this stage. So I decided I wanted to be able to monitor the quality of Wikidata while using Wikipedia as a user. The gadget can be activated on a user account and will show the quality level (calculated by ORES) of the related Wikidata item on the Wikipedia page I am looking at. Based on an overview of important people, I found two examples of how the gadget works. You can see that 'Sargon of Akkad' has a C-level quality rating, so it would be a good candidate for editors to complete, while Douglas Adams has an A-level rating. Next step: want to take random samples of data to get an average quality score for Wikidata items, and quality-gap measures.

  1. Common templates in the visual editor, and a surprise live interactive demo - (common templates) (ISBN scanner)

Brief: Wikis want the ability to list common templates inside the visual editor.
Contacts: Editing team, @Esanders
Status: in code review

Common templates: a couple of wikis complained that the template dialog didn't give much guidance to newcomers, so now there's a new JSON message for common templates. It will pull in those templates and list them as quick links for easy access. Could be improved into a template directory for infoboxes, conversion templates, etc.

Scan ISBN: Better ways to contribute on mobile phones for those adding references. The visual editor can use the phone camera to scan an ISBN and get an automatic reference.


Brief: Develop a new home screen UI and Explore UI for Commons app.
Contacts: @josephine_l, @maskaravivek, @Sharvaniharan, @Ujjwalagrawal17, @schoenbaechler, @Ashishkumar468
Status: Unstable alpha on development branch

We joined the Commons app team to work on the Commons app. Improved the current navigation, which beforehand was a hamburger icon at the top left; the demo follows the visual display. Moved away from the left-side hamburger to a bottom-right hamburger and bottom navigation (before and after images). Bottom navigation has advantages: easier to discover, easier to reach with thumbs, all wins against things that are hidden. This is the Explore feed, before and after; we are currently working on implementing it. Prototypes and mockups were shown of the new UI design, which is cleaner with more of a white-space approach; we toned down the blue a bit to make the images shine more. Technical challenge: while surgically removing the UI from the top and placing it at the bottom, we had to separate items in the code, because they were not decoupled properly enough to change UI elements.


Brief: Give users the option to sort search results by date created, date last edited or by relevance (default).
Contacts: @Tonina_Zhelyazkova_WMDE
Status: work in progress, browser tests missing/failing.

Tonina: Feature for Advanced Search. You can sort search results by date edited or date created; by default results are sorted by relevance. Implements a keyword in CirrusSearch and uses OOUI. Work in progress because the browser tests are not finished; a ticket for more browser tests is submitted on Gerrit.

  1. THICC -

Brief: Some sort of content-model structured voting and discussion system.
Contacts: @Isarra, @Magwac, @Bawolff
Status: had fun exploring. :D (if I understand well ^^)
Isarra: Trying to make a voting extension: voting and discussions with threaded stuff. How do you scroll on a Mac? [sic] Tried to make a mockup; this is how voting works on Uncyclopedia. We wanted to make it so people couldn't screw up threading with comments and replies, and we discovered why other extensions made for this suck. Didn't get very far: we have some threading and it's JSON. We want to add an interface to edit this properly, then we can implement the actual voting like we wanted initially. We wanted to use multi-content revisions, but it doesn't support this use case: you can only have one slot per particular page at a time, and there are lots of issues because it's used for Commons use cases only (the RfC described others). Would be nice to implement, but that person isn't here anymore. MediaWiki only supports wikitext transclusion; there's a TODO comment for HTML transclusion, which would be nice but way over our heads. So lots of ways to go with this.

Good presentation! And complaints are good.
Talk pages consultation, ongoing at the moment, may surface the need of voting systems.


Brief: The namescript Wikidata user script now has a documentation page, and descriptions for kanji names were improved.
Contacts: @Harmonia_Amanda, @LucasWerkmeister
namescript has existed for a while, and this latest improvement is now in effect; other improvements are likely to follow in the future

Lucas and HA presenting together: Wikidata has one item per name string, so different writing systems should be different items. It's difficult to add descriptions if you don't have the name in a given language. Fixed a problem with Japanese names: Unicode doesn't distinguish between kanji and the Chinese writing system, so it's difficult to disambiguate the two; made it better so it will work. You have the native label, and namescript adds e.g. the Latin name for the name and label. Non-Latin descriptions get the Latin form in parentheses: "レンヌ (Rennes)". Documentation is on Wikidata so you can use it yourself.


Brief: Cleaning up the Score extension with TimedMediaHandler
Achieved: mostly refactoring work -> Prototyping changes
Contacts: @Ebe123 (Étienne Beaulé)
Status: Score extension in production, changes in Gerrit (prototype)

Étienne: Has a physical sheet of paper, which is difficult to present: "Can't show it on the computer, so you have to believe me." This is what it used to be (unintelligible complex diagram) and now it's better (slightly less complex diagram). [Loud applause ensues] Now when you upload MIDI files, they are transcoded automatically on Commons and Wikipedia pages, so you don't need to download the file and run it through a synthesizer.


Brief: Worked with the Growth Team of WMF to implement two new features for the GrowthExperiments extension: Allow mentors to configure their presentation (T220145) & Create GuidedTour to help users understand how to follow up on their questions with notifications (T220146)
Contacts: @Tim_WMDE
Status: Ready for beta deployment, may need additional reviews first
Tim: I am from Wikimedia Germany. This weekend I joined the Growth team.
The Homepage project ( ) introduces mentors for new users, who are automatically assigned to them. I added a couple of features to the extension that allow mentors to configure their descriptions, making them more personal; previously it was just text saying "hey, this is your mentor". Now each mentor can have custom text: if you add text next to your name, it will show up on the user's dashboard, in the mentor section below.

GuidedTour about notifications
Users can ask questions on the Homepage through a dialog on the page, but the team found that users might not be aware that notifications pop up at the top. Added an info box saying where the response will arrive: after asking a question and clicking done, a popup alert specifies where to expect the answer. Now users are aware when there's a new notification.


Brief: Cite Unseen is a Wikipedia user script that evaluates sources and inserts icons that help Wikipedia readers quickly and easily evaluate the sources used in a given Wikipedia article, checking for the potential orientation of a source and its possible ideological biases.
Contacts: @SuperHamster (Kevin Payravi), @Sky_Harbor (Josh Lim)
Status: Deployed as a user script (in beta)
Speech: The Cite Unseen tool identifies sources in Wikipedia based on categorizations of ideological sources, basically helping verify whether sources are trustworthy. There are different classifications of source types, e.g. biased, government-controlled, press release, newspaper, tabloids, books; anything can be used right now. Icons to the left of each source represent the type of source, such as book, press release, news source, opinion piece, etc. Added these icons so readers can tell at a glance where sources are coming from. A data dump will list all URLs currently used on Wikipedia (not done yet, because there are lots of sources out there). Please add to our source list on GitHub. Only tested on English Wikipedia so far, but we plan to deploy on other Wikipedias.
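The core of such a script is a domain-to-category lookup over each citation's URL. A minimal sketch of that idea (the real categorization lives in Cite Unseen's own source list; the domains and category names below are examples, not the script's actual data):

```python
# Toy domain classifier in the spirit of Cite Unseen's source list.
from urllib.parse import urlparse

# Example rules only; the actual list is community-maintained on GitHub.
SOURCE_TYPES = {
    "nytimes.com": "news",
    "dailymail.co.uk": "tabloid",
    "whitehouse.gov": "government",
    "books.google.com": "book",
}

def classify_citation(url):
    """Map a cited URL to a source-type category, or None if unknown."""
    host = urlparse(url).netloc.lower()
    for domain, category in SOURCE_TYPES.items():
        # match the domain itself and any subdomain of it
        if host == domain or host.endswith("." + domain):
            return category
    return None
```

The user script would then render the matching icon next to each reference it classified.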


Brief: Structured data on Commons has been introduced, but we don't have any bots yet that can migrate data. Build some proof of concepts.
Digital representation of:
Contacts: @Multichill (inspired by the Pywikibot meet-up)
Status: Proof of concept/prototype
Maarten: Pywikibot doesn't support structured data on Commons yet. Made a proof of concept to see if it's possible to edit such data on Commons with a bot. First figured: we have Picture of the Day on Commons, and each one already has multilingual captions. So I imported captions from Picture of the Day; if you scroll all the way down, you will see them under the captions section. So we already have 5k nice multilingual captions without any extra work. E.g. for a frog image, the bot can also import the big statements: the API just allows you to edit anything, so I was able to add other properties. The user interface will not like this, but they will probably fix it soonish. A bug exists where edits are attributed based on IP; would like to implement it so it edits through Pywikibot instead.
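Captions on Commons are stored as labels on the file's MediaInfo entity, so a caption import boils down to building a wbeditentity payload against the file's M-id. A hedged sketch (not Maarten's actual script; token and session handling omitted, and the M-id shown is made up):

```python
import json

def caption_payload(mediainfo_id, captions):
    """Build wbeditentity parameters that set multilingual captions
    (captions are MediaInfo 'labels'). mediainfo_id is the file's M-id,
    e.g. 'M12345' (illustrative); CSRF token and login are omitted."""
    labels = {lang: {"language": lang, "value": text}
              for lang, text in captions.items()}
    return {
        "action": "wbeditentity",
        "id": mediainfo_id,
        "data": json.dumps({"labels": labels}),
        "format": "json",
    }

# Import two captions for a hypothetical Picture of the Day file:
payload = caption_payload("M12345", {"en": "A frog", "cs": "Žába"})
```

A bot run would POST this payload once per file, reading the source captions from the Picture of the Day templates.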


Brief: Announcement of this year's Technical Conference
Contacts: @greg, @Bmueller
Status: announcement.
Nov 12 - 15 IN ATLANTA, GEORGIA, USA Y'ALL :D :D :D!!! Topic: making developers' lives easier. TL;DR: please come.
Will send out a nomination form; you can nominate yourself or others, including who, why, and what you want to talk about. In a couple of months we will begin making selections.


Brief: Writing a state-of-the-art tutorial on the OOUI framework.
Achieved: We wrote a tutorial on how to create a larger-scale application with OOUI that follows the Model-View-Controller pattern to make it more maintainable. In a series of refactoring steps we show how the "Todo List" application we created in the basic tutorials goes from a collection of Widgets to a full-fledged application with proper separation of concerns.
[What is that?]
Contacts: @gabriel-wmde, @Mooeypoo
Status: Work in progress
Gabriel: Used the todo app, and a series of refactorings, to show how to move from a bunch of JavaScript to a well-structured application, where everything is separated into classes and responsibilities. Work in progress; hope to finish the example code and intro text by the end of today. There's a pull request on the issue currently.


Brief: Don't write SPARQL queries by hand; instead, drag and drop easy components (Code: )
Contacts: Michi (@MichaelSchoenitzer_WMDE, @MichaelSchoenitzer) with help from others
Status: working prototype
Michi: Wikidata Query Service in Blockly. [[Talking and showing how to put the query together graphically]]
In the Wikidata Query Service, writing queries is hard. So I made a prototype for a new interface, a graphical programming language: you get blocks of the things you need to build queries, using Blockly's block-element UI design. [Shows demo] Would like to make it more user-friendly and the results table more presentable and exportable.


Brief: Visualization experiment for Scholia
Demo: (didn't work on my Chromium)
Contacts: @Fnielsen (Finn Årup Nielsen), @Alicia_Fagerving_WMSE
Status: Early conception
Finn: Develops Scholia (a web app built on Wikidata, running on Toolforge). Would like to have more flexible visualization, so tried to learn how D3 can be combined with output from the query service. Have a query for applications of research and topics of publications, and put them up in a D3 arc plot. At some point we will be able to include new visualizations like this; the time block is an embedding of the timeline from the Wikidata Query Service. Recently got a grant from the foundation to look for a developer and a front-end developer to get to these tasks (for visualizations).

Closing session speeches

Klara: Thanks from WMCZ to everyone who came all the way to Prague, and for the engagement during the event. Hope you had a chance to make new friends. Want to thank all the partners: National Library of Technology, Campus Dejvice and Cafe Protoru.
Most of all, thanking the Hackathon team, always willing to help: Martin, Petr, [clapping and inaudible], Tomas, Megan at the registration desk [clapping]. People from the Czech community: Martin, [more names?], Andre. Thanks to Hackathon mate Natalia, who made it all come true from plan to reality, a great coordinator and person. Least but not last [sic!] goes to Rachel, who supported us through the preparation process and the event, and helped calm us when we were panicking.
For tonight: The gallery is still open, dinner is here again tonight. You can stay until 2am again, and cafe is again open after that.

Rachel: Update your Phabricator tasks! Don't leave Andre with a graveyard [laughter]
We will send a feedback survey. We change these events every year based on your feedback; we want to hear back. Send email or tell us on Phabricator.
We will have a Hackathon in 2020, but where it will be is not confirmed yet. Some organizational backlog. Be patient.
We also want to help with other local events - if you want help, send us an email.
Finally: What a pleasure to work with WMCZ. Please another round of applause. Especially for Klara and Natalia. And for everyone here, you all contributed to this event - please applaud for yourselves!

Klara: Finally, we would like to teach you something at the end. The Czech word for "good bye" is "na shledanou". ( IPA: [ˈnasxlɛdanou̯] )

Not presented, but here's some info documenting our project

Flickr Dashboard
Edit photos on Commons and Flickr at the same time.
Team: @Samwilson

Saami + Romani language improvements
Description: Improve localization for several Saami and Romani languages.
Status: Partially merged but not deployed yet, partially still in code review
Team: @Yupik, @Lucas_Werkmeister_WMDE, @Zache, @siebrand et al.

Description: Various improvements, the biggest one involves the migration to mediawiki.router (+74, -789)
Team: @simon04, @Jdlrobson, @thiemowmde

Wiki Loves iNat
Description: upload much-needed photos from iNaturalist to Commons
Demo/URL: (code at )
Team: @Jdlrobson

Tool to trim videos on Commons. Phabricator:
This is used to trim (crop) videos on Wikimedia Commons and upload them back to Commons.
This hopefully saves users time; it is 10 times faster.
Contact: @Gopavasanth

Partially fixed: "Highlighting of users sometimes doesn't apply to out of area revisions"
The above RevisionSlider patch was done together with @WMDE-Fisch

Description: Create and document a unified, user-friendly workflow for GLAM uploads on both Wikidata and Commons. We created a workflow using Pattypan and OpenRefine, defined the missing steps needed, and started an easily translatable and updatable documentation -
Team: @Lena @Anne-LaureM @Ecritures @Ash_Crow @Mh-3110 @PierreSelim

Fixing code error in production / Collection extension / "PHP error: Undefined index: items"
Description: The Collection extension no longer emits this bug under normal operation.
Status: The code was patched. The patch was deployed to production.
Team: @Reedy, @Krinkle

Added unit testing suite, integrated VSCode debugger, a few minor bug fixes, ground work for performance improvements on node-libzim
Team: @Isnit001

Wiki Education Dashboard
After meeting with Czech Wikipedians who use Programs & Events Dashboard for classroom projects, we identified 14 bugs and feature requests, then we tackled many of the highest priority ones:
The 'Article Finder' tool for searching for high-impact underdeveloped articles now has a language picker:
Campaign organizers can now make edits to any programs in their campaign
We added an API that can be used by a mediawiki gadget to display information about courses from Special:Contributions (gadget in progress by @Urbanecm)
We made it easier to copy previous programs for a new academic term
We made a bevy of UI improvements and bug fixes
We made the features for downloading detailed campaign metrics more discoverable
We upgraded several key dependencies, and improved build speed
Wikidata labels show up in more places
All of these improvements are live now on
Team: @Ragesoss, @wes_wiki_edu, @psinghal20, @takidelfin

Wikidata Query Service: UX improvement and bug fixes around creation of short urls
The short url was not possible to select or copy. This is now possible, and easy.
Team: @Smalyshev, @Krinkle

Improvement on the search bar of the query service, using an ElasticSearch query. Explored tools to automate search tests, such as Relevant Forge. Relevant Forge actually looked to b
Reference issue that we worked on
Team: @Jumtist, @Maxlath

Indonesian translation of
Translated ~800 strings
Team: @Peb, @Maxlath

Updating external links to GLAM catalogues
There are wiki pages about art works (e.g. Wikipedia articles, File: pages for images on Commons) that do not link to the official catalogue entries for the work in its host museum. Perhaps the image has been obtained from an aggregator such as RKDimages. Sometimes the image or details have been retrieved from a site that the museum no longer maintains, and there is a new official link that should be added. I have been working on a way to query for external links using the Wikidata Query Service. It does not quite work yet due to a bug in SPARQL access to the MediaWiki API (reported at the above phab link). Thanks to Lucas Werkmeister, who has been extremely helpful.
Team: @MartinPoulter

Quickly add Wikidata statements based on url
Pass the property and target item you want to add to an item in the URL, so you can quickly add a statement.
Code at , example at
Team: @Multichill
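The idea above can be sketched as a small transformation from a URL's query string into Wikibase API parameters. This is a hedged illustration, not the tool's actual code: the parameter names ('property', 'target') and the example URL are assumptions, and the entity id and CSRF token needed for a real wbcreateclaim call are omitted.

```python
# Hypothetical: turn ?property=P31&target=Q3305213 into wbcreateclaim params.
from urllib.parse import urlparse, parse_qs

def claim_from_url(url):
    """Extract (property, target item) from the URL's query string and build
    wbcreateclaim parameters for an item-valued statement."""
    qs = parse_qs(urlparse(url).query)
    prop, target = qs["property"][0], qs["target"][0]
    numeric_id = int(target.lstrip("Q"))
    return {
        "action": "wbcreateclaim",
        "property": prop,
        "snaktype": "value",
        "value": '{"entity-type":"item","numeric-id":%d}' % numeric_id,
        "format": "json",
    }

# e.g. add "instance of (P31): painting (Q3305213)" to whatever item is open:
params = claim_from_url("https://example.org/add?property=P31&target=Q3305213")
```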

Audio subtitles for video.js
For our next generation video player based on video.js, I created a plugin to show subtitles/captions to users for audio clips.
<URL of the demo, if available.>
Team: @TheDJ

Released cloudelastic
Like the MySQL replicas in WMF Cloud, except for Elasticsearch.
Team: @EBernhardson

Global Search Tool
Provides keyword and source regex queries against all WMF wikis in a single search request. Powered by cloudelastic.
Phabricator board:
Team: @MusikAnimal @EBernhardson

Import hackerspaces from to Wikidata with a bot
The idea is to automatically add all hackerspaces listed in the Semantic MediaWiki on to Wikidata.
Unfinished sample code using pywikibot is here:
Team: @Jampo001

New node.js 10 and python 3.5 runtimes for Kubernetes webservices in Toolforge
Team: @bd808

Improve Data namespace licenses and allow descriptions with wikitext
Technical changes to clearly indicate data that is under this license. Add support for CC-BY-SA and a way for the community to add wiki markup to the top of the dataset pages. That markup will allow for messages, categories, deletion requests, etc.
Team: @MSantos

Scrape all the pictures of the digital image repository of the National Archive of Curaçao with the Copyright status 'negative'
Attempt to scrape the image URLs and metadata out of the National Archive of Curaçao's image bank for images that have a copyright status of "Negatief". So far the metadata of all 1103 images with this status has been cleaned; the URLs are still in progress. Once the CSV file is ready, Yupik will send it to Ecritures, who will batch-upload the images to Commons.
Team: @Ecritures, @Yupik.
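The filtering step can be sketched with Python's csv module; the `copyright` column name here is an assumption about the real metadata file:

```python
import csv
import io

def rows_with_status(csv_text, status="Negatief"):
    """Return the metadata rows whose copyright status matches.
    The 'copyright' column name is an assumption about the real data."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader if row.get("copyright") == status]
```

The surviving rows (plus the scraped image URLs, once ready) would then feed a batch-upload tool for Commons.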

Split MediaWiki tests into unit and integration tests
Rather than our current setup, in which all tests are bundled in a single directory, split the unit and integration tests into two separate directories: and
Team: @kostajh @Michael @TK-999 @Ladsgroup

Fix longstanding bug in Wikidata-Lua-Module in German Wikipedia
When using data from Wikidata in the German Wikipedia, duplicate sources were shown multiple times; they now get reused.
Team: @MichaelSchoenitzer with important help from <Sorry I forgot your name>.

Add copy button to URL shortener result
After shortening, you need to select and copy the link from Special:UrlShortener. It would be amazing if we had a button and when you clicked it, boom, magically it stores it in the clipboard.
Team: @Esanders

Normalize homoglyphs in mixed-script tokens
Oзон and Озон look the same, but the first one starts with a Latin O rather than a Cyrillic О. Searching for either will not find the other. Working on an Elasticsearch plugin to attempt to map homoglyphs in mixed-script tokens and index any single-script variants that can be generated. Didn't get very far, but had lots of interesting conversations about search (and juggling).
Team: @TJones
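A rough sketch of the variant-generation idea in Python (the real work targets an Elasticsearch plugin; the homoglyph table here is deliberately tiny):

```python
import unicodedata

# Deliberately small homoglyph table (Latin -> Cyrillic); a real plugin
# would cover many more characters and script pairs.
LATIN_TO_CYRILLIC = {
    "A": "А", "B": "В", "E": "Е", "K": "К", "M": "М", "H": "Н",
    "O": "О", "P": "Р", "C": "С", "T": "Т", "X": "Х",
    "a": "а", "e": "е", "o": "о", "p": "р", "c": "с", "x": "х", "y": "у",
}
CYRILLIC_TO_LATIN = {v: k for k, v in LATIN_TO_CYRILLIC.items()}

def scripts(token):
    """Which of the two scripts appear in the token."""
    found = set()
    for ch in token:
        name = unicodedata.name(ch, "")
        if name.startswith("LATIN"):
            found.add("latin")
        elif name.startswith("CYRILLIC"):
            found.add("cyrillic")
    return found

def single_script_variants(token):
    """For a mixed Latin/Cyrillic token, return any variants that fold
    entirely into one script; these could then be indexed alongside the
    original token so either spelling finds the other."""
    if scripts(token) != {"latin", "cyrillic"}:
        return []
    variants = []
    for table, target in ((LATIN_TO_CYRILLIC, "cyrillic"),
                          (CYRILLIC_TO_LATIN, "latin")):
        folded = "".join(table.get(ch, ch) for ch in token)
        if scripts(folded) == {target}:
            variants.append(folded)
    return variants
```

For "Oзон" (Latin O followed by Cyrillic зон), this yields the all-Cyrillic "Озон"; no all-Latin variant exists, because з and н have no Latin homoglyphs.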

Still waters run deep
Wikipedia article contest results visualisation for insight/media purposes
Team: @tramm

Import Wikidata query results as pandas DataFrames in PAWS
Description: Run SPARQL queries from a Jupyter notebook and get the results as a pandas DataFrame; installed SPARQLWrapper and sparqldataframe
Status: example code
Team: @Peb
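Even without those libraries, the flattening step can be sketched in plain Python: a query result in the standard SPARQL 1.1 JSON format becomes a list of dicts, which is a shape `pandas.DataFrame()` accepts directly.

```python
def bindings_to_rows(result_json):
    """Flatten a SPARQL 1.1 JSON result into a list of dicts, one per
    result row, suitable for passing to pandas.DataFrame()."""
    cols = result_json["head"]["vars"]
    return [
        {col: b.get(col, {}).get("value") for col in cols}
        for b in result_json["results"]["bindings"]
    ]
```

SPARQLWrapper and sparqldataframe do essentially this (and more, such as type handling) for you.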

Rewrite Commons {{book}} template
Switch Commons {{book}} template to use Lua code already used by {{Artwork}} and {{Photograph}} templates
The {{book}} template on Commons is one of the large file infoboxes used for describing books. It was implemented as an old-style template and did not make any use of Wikidata.
During the hackathon I finished rewriting the template in Lua, using existing code already used by a couple of other infobox templates.
Next steps will be monitoring for issues that might pop up, and visiting many pages that use the module to verify there are no problems.
Team: @Jarekt

Wikidata Stream
A nice view of the Wikidata edit stream to use as an eye-catcher for people interested in Wikidata
Team: @Tobias1984

Wikidata autocomplete gadget for external-id properties
Description: Create a gadget that provides autocomplete for selected external-id properties (like VIAF) by querying external search APIs.
Status: A working gadget ( and backend service ( that currently only works with VIAF.
Next steps: Cleanup the backend service code and add tests for it, add support for more services, and then try to get the gadget approved.
Team: @Danmichaelo
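The backend's job can be sketched as reshaping a search-API response into the suggestion objects an autocomplete widget needs. The field names below (`result`, `term`, `viafid`) are assumptions for illustration, not VIAF's documented schema:

```python
def to_suggestions(api_json):
    """Map a search-API response to label/id pairs for an autocomplete
    dropdown. 'result', 'term' and 'viafid' are assumed field names,
    used only for illustration."""
    return [
        {"label": r.get("term"), "id": r.get("viafid")}
        for r in (api_json.get("result") or [])
    ]
```

Keeping this mapping in a backend service means the gadget itself only ever sees one uniform suggestion format, which makes adding more external services later easier.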

Host Primerpedia on Toolforge
Description: Primerpedia is currently hosted via GitHub Pages, at . Hosting it on Toolforge would bring it closer to the Wikimedia universe and make it more discoverable.
Status: Migration to Toolforge done, complete with auto-update and everything! 🎉
Team: @waldyrious, with help from @Chicocvenancio and @zhuyifei1999.

Add multi-language support to Primerpedia
Description: Primerpedia was hardcoded to load articles from English Wikipedia. The goal of this task was to implement support for fetching content from other language Wikipedias.
Status: Done. Pull request submitted at, merged.
Demo/URL: Try the feature live at (enter a different language code in the text box)
Team: @witia1 (GitHub username; not sure what his Phabricator username is), @waldyrious
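The core of the change can be sketched as parameterizing the API endpoint by language code. The query parameters below are a plausible sketch for fetching a random article intro, not necessarily what Primerpedia actually sends:

```python
import urllib.parse

def random_article_url(lang="en"):
    """Build a MediaWiki API request for a random main-namespace article
    with its intro extract. The parameter set is a plausible sketch,
    not necessarily the one Primerpedia uses."""
    params = {
        "action": "query",
        "format": "json",
        "generator": "random",
        "grnnamespace": 0,
        "prop": "extracts",
        "exintro": 1,
        "origin": "*",
    }
    return f"https://{lang}.wikipedia.org/w/api.php?" + urllib.parse.urlencode(params)
```

Once the hostname is derived from the language code, everything else in the app stays unchanged.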

Collect information on SPARQL query builders
Description: There are several visual SPARQL query builders around that aim to make it easier to compose SPARQL queries, but information is dispersed and in some cases outdated. It should be collected and made available to the community.
Status: Complete. All the currently working SPARQL query builders are now listed in a single place
Team: @waldyrious

Conduct usability experiments of SPARQL query builders
Description: The various existing tools for visually building SPARQL queries incorporate different design decisions, with different usability consequences. User-testing these tools will identify areas to improve in these or future tools.
Status: Done. Usability testing was conducted on all the visual query-building tools that were live or could be revived, and several potential improvements were identified.
Demo/URL: Results reported in
Team: @Charlie_WMDE, Jakob_WMDE, @waldyrious

Improve Toolforge documentation at wikitech.org
Description: The Toolforge documentation at is extensive but somewhat confusing, especially for beginners. It should be accessible, discoverable, easy to navigate, and reasonably complete.
Status: Done. Updates were made to various help pages, including Portal:Toolforge, Help:Toolforge, Help:Getting Started, Help:Toolforge/Web, and others.
Demo/URL: The full list of edits can be seen at
Team: @waldyrious, with help from @Chicocvenancio, @bd808 and @Quiddity.

Research and document how to set up auto-updating of Toolforge tools from a GitHub repository
Description: GitHub is a popular code hosting and collaboration platform, including for many Toolforge tools. There should be easy-to-follow instructions on how to set up a Toolforge tool to stay automatically in sync with its GitHub repo.
Status: Done. Various approaches were investigated and experimented with, and the simplest one was documented thoroughly, in a step-by-step guide.
Team: @waldyrious, with help from @Chicocvenancio, @zhuyifei1999 and @bd808.

Improve documentation for WMDE's Wikibase Docker images repository
Description: The README for is a bit sparse and confusing for a beginner, which hinders its usage. It should be improved so that newcomers can easily understand and follow it.
Status: Partially done. A first PR has been merged, and a second one is in progress.
Team: @waldyrious, with help from @alaa_wmde and @Tarrow.

Create a SPARQL Query for the book "Dolle Mythes" for a writing competition on the Dutch Wikipedia
I checked the 63 names from the name register of the book by hand for Wikidata items and added P1343 (described by source) and Q63888904 (Dolle Mythes) with QuickStatements. Then I created (my first) SPARQL query and integrated it at in Wikipedia with {{tl|Wikidata list}}
URL of the finished list
Team: @Ciell with help from @Andrawaag and @Harmonia_Amanda

Start converting Wikibase to extension registration
Description: Extension registration (i. e. extension.json instead of ExtensionName.php) has been the preferred way to use MediaWiki extensions for a while now, but Wikibase has resisted efforts to port it to this system several times. I managed to get started on it, following a process that was successful in the Translate extension.
Team: @Lucas_Werkmeister_WMDE (@Geertivp)

Event Timeline

Rfarrand created this task.

Note: The closing session has largely been formatted with the following format:
{Title} - {phabricator url}

  • Brief: {brief}
  • Achieved: {achieved}
  • Contacts: {contacts}
  • Demo: {demo}
  • Status: {status}
  • Video: {video urls}
  • Image: {image urls}
  • Links: {link urls}
  • Project: {wikidata|commons|all|etc.}

Speech by {username}: {speech}.


  • Hackathons: this format could be reused at future hackathons. Improvements: encouraging the systematic use of wiki usernames would be better for anonymity, rather than the current mix of names, usernames, and Twitter accounts.
  • Webpage and tracking: the format could be converted via text processing (regex) into JSON, to create a more elegant HTML/CSS gallery.

@Yug can you perhaps summarize the changes you just made to the task description? Phabricator only shows it as one huge “remove and add everything”, and all I can tell is that my changes seem to have gotten lost in the process :/

This comment was removed by Yug.

I simply copy-pasted the content of the etherpad with some new content, into this issue. Sorry for your contents >__< Could you fish them back, add them to the etherpad and here ?



  • Hackathons: this format could be reuse in next hackathons.

Hi @Yug. We do already have a standard format as described at which links to the template that we start with (and that we tweak each year) at -- We try to keep it easily copyable for each new event. I've now improved it with the latest tweaks and ideas from this event's final showcase formatting. Thanks to all who tweaked it!

Improvements: encourage systematic use wiki usernames would be better for anonymity, rather than the current mix of name, usernames, twitter accounts.

All usernames should be Phabricator usernames, as specified in the template.

  • Webpage and tracking: format could be converted via text processing (regex) into JSON, to create a more elegant html/css gallery.

I agree it could be a good improvement to instead store these Showcase notes onwiki, and create a more elegant listing of the results perhaps with some screenshots. Perhaps you'd like to experiment with attempting that?

To create the initial version in the description here, I first copied it into a gdoc (which semi-preserves the formatting) and then copied that into the task description. It's hard to copy out of etherpads because of (A) bulletpoints and (B) linebreaks. Wiki/Phab/gdoc each recognize different parts of the rich-text that is copied-to-clipboard from each of the others. The only existing tool to assist with that is documented at

(Sorry for the late response. I was traveling.)

(p.s. Let's fork this discussion over to T219203: Wikimedia Hackathon 2019 volunteer group: Documentation instead, so as not to further distract the ~100 people subscribed here. :-)

Quiddity added a subscriber: Ciell.

my changes seem to have gotten lost in the process :/

fixed. (and others)

Sorry for the large update. Phabricator doesn't make the diff easy to read, but I changed mostly formatting for consistency and readability, and updated the introductory text to reflect the status of the Etherpad document. Here's a more readable diff: