Old hand at Wiki[pm]edia. Information scientist. Working at the German library network GBV.
User Details
- User Since: Oct 1 2015, 6:56 PM
- Availability: Available
- LDAP User: JakobVoss
- MediaWiki User: JakobVoss
Jun 13 2019
Changing accessTokenURL alone will not work with Wikimedia wikis; for instance, the default baseURL is https://meta.wikimedia.org/, so w/index.php must be appended. Changing userAuthorizationURL will not work either, as mentioned at https://www.mediawiki.org/wiki/OAuth/For_Developers#cite_note-4, because of bug https://phabricator.wikimedia.org/T74186.
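For reference, a minimal sketch of the endpoint URLs this implies, assuming the consumer is registered on meta.wikimedia.org; the Special:OAuth paths are the standard MediaWiki OAuth 1.0a endpoints, not taken from this ticket:

```python
# Hedged sketch: OAuth 1.0a endpoint URLs for Wikimedia wikis, assuming
# consumer registration on meta.wikimedia.org. Note the required
# w/index.php suffix that the default baseURL lacks.
BASE = "https://meta.wikimedia.org/w/index.php"

ENDPOINTS = {
    "initiate":  BASE + "?title=Special:OAuth/initiate",
    "authorize": BASE + "?title=Special:OAuth/authorize",
    "token":     BASE + "?title=Special:OAuth/token",
}
```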
Just use the still-existing repository location and publish a new release on npm. I created a pull request at https://github.com/milimetric/passport-mediawiki-oauth/pull/3.
Mar 23 2018
This has partly been implemented in wikidata-integrator, but full support of DOIs requires this issue to be resolved.
Adding via DOI (probably from Crossref data) is covered by another ticket: https://phabricator.wikimedia.org/T172043
Dec 22 2017
Seems to be a duplicate of https://phabricator.wikimedia.org/T107021
Nov 27 2017
This is either not solved or the fix is not deployed. The latest dcat-ap file from November 22nd is broken: https://dumps.wikimedia.org/wikidatawiki/entities/dcatap.rdf (it contains xmlns:rdf="").
Nov 14 2017
Creating RDF/XML by hand is bad practice anyway, no matter the programming language. The current script could be modified to use Purtle, like other Wikimedia software does; this would also fix the bug.
Nov 3 2017
Yes, being able to query the information from dcatap would increase its usability a lot, because WDQS is integrated into the Wikidata tool ecosystem, while having to download, parse, and evaluate the RDF file on your own requires RDF tooling. There is no statement that WDQS content and the described dumps are from the same date, so I don't understand the problem. I think dcatap could be added and updated as a named graph. Maybe this is related to Wikistats; I would also welcome a dedicated SPARQL endpoint with information about dumps and statistics - this endpoint could be included into WDQS via federated queries (see the sketch below), but I don't want to open a can of worms if there is a simple solution.
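A hedged sketch of what such a federated query could look like; the endpoint URL and the DCAT shape of the data are assumptions, since no such endpoint exists yet:

```python
# Hedged sketch: querying dump metadata via SERVICE from WDQS, assuming a
# hypothetical dedicated SPARQL endpoint that publishes the dcatap data.
import requests

QUERY = """
PREFIX dcat: <http://www.w3.org/ns/dcat#>
PREFIX dct:  <http://purl.org/dc/terms/>
SELECT ?distribution ?url ?modified WHERE {
  # hypothetical endpoint exposing the dcatap file as SPARQL
  SERVICE <https://dumps.example.org/sparql> {
    ?dataset dcat:distribution ?distribution .
    ?distribution dcat:downloadURL ?url ;
                  dct:modified ?modified .
  }
}
"""

response = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
)
print(response.json())
```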
Oct 25 2017
Seems to be fixed, at least in the current file.
Jun 13 2017
The Chopin example could be rewritten like this to make it a simpler example:
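(The query from this comment is not preserved; the following is a hedged reconstruction of what a simple Chopin query might look like, assuming the example referred to Q1268 for Frédéric Chopin and P86 for composer.)

```python
# Hedged reconstruction of a simple "Chopin" example query; Q1268
# (Frédéric Chopin) and P86 (composer) are assumptions about what the
# original example referred to.
import requests

QUERY = """
SELECT ?work ?workLabel WHERE {
  ?work wdt:P86 wd:Q1268 .  # composer = Frédéric Chopin
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
"""

r = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
)
for row in r.json()["results"]["bindings"]:
    print(row["work"]["value"], row.get("workLabel", {}).get("value", ""))
```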
May 21 2017
The target page should be https://www.wikidata.org/wiki/Wikidata:SPARQL_query_service - if this is not a good starting point, the page should be improved rather than pointing to another subpage.
Have a look at https://www.npmjs.com/package/wikidata-taxonomy for some previous work on creating hierarchies from Wikidata. The tool includes additional information such as the number of instances, the number of sitelinks, and additional parents. A tree visualization could also show arbitrary additional information, so it would be like the table visualization but with a special first row holding the hierarchy. One must also make sure to handle multi-hierarchies.
May 20 2017
I wrote a short blog post about what I learned about Phabricator yesterday: http://jakoblog.de/2017/05/20/introduction-to-phabricator-at-wikimedia-hackathon/
May 18 2017
Some of the pages could also be merged. For instance, I regularly struggle to find anything on https://www.wikidata.org/wiki/Help:Properties or https://www.wikidata.org/wiki/Wikidata:Properties. It's confusing to have both Help:TOPIC and Wikidata:TOPIC anyway.
When and where are blog articles expected to be published? Sure, I can post to my private blog (http://jakoblog.de), but I am also happy to write for another blog instead.
For shared writing of blog posts I recommend http://hackmd.io/ - it's similar to Etherpad but uses Markdown syntax. You should be able to copy the resulting HTML into your blogging software, and it can be converted to MediaWiki syntax with pandoc (e.g. pandoc -f html -t mediawiki). In any case, collaborative real-time writing is helpful; I'm also fine with Google Docs or Etherpad. We could share links to the editable documents here, couldn't we?
Dec 29 2016
Thanks, I ended up using 2.0rc5 and adding SPARQL capabilities to my own code. For my use case this issue can be closed, but a new release of pywikibot is needed anyway.
Dec 11 2016
Sure, the sources work, but I want to release a tool based on pywikibot. My current requirements.txt contains pywikibot, but the latest released version is 2.0rc5, which lacks the SPARQL features. I don't mind which version number it gets, but new features must be released to be usable. Having unreleased features in a development branch makes sense, but these features are already deployed at PAWS and documented (?). Without releases it is hard to tell whether and how a feature is available.
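For context, a minimal sketch of how the unreleased SPARQL support is used; the module path and method names reflect the development branch at the time and should be treated as assumptions, since no release documents them:

```python
# Minimal sketch of the unreleased SPARQL support discussed here, based on
# pywikibot's development branch; treat the module path and method names
# as assumptions until they appear in a release.
from pywikibot.data import sparql

QUERY = """
SELECT ?item WHERE { ?item wdt:P31 wd:Q146 } LIMIT 5
"""

endpoint = sparql.SparqlQuery()
for row in endpoint.select(QUERY):  # returns a list of result rows (dicts)
    print(row["item"])
```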
Oct 26 2015
At least the documentation lacks a clear description of this complex language negotiation mechanism: uselang is not mentioned, the meaning of 'strictlanguage' is unclear, and the examples only cover English.
Oct 24 2015
I found an example where even uselang and language do not help and part of the response is always in English:
Oct 22 2015
I also stumbled upon this weird behaviour. The Accept-Language HTTP header is also ignored, but the response language of some fields comes from a cookie (?!). Luckily the uselang parameter can be used to control the response language, but this should definitely be covered in the API documentation.
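A minimal sketch of the workaround; the API module and parameters are the standard wbgetentities ones, and whether all response fields actually honour uselang is exactly what this ticket questions:

```python
# Minimal sketch of the uselang workaround: request the same entity in a
# given language and inspect which response fields follow it.
import requests

API = "https://www.wikidata.org/w/api.php"

def get_entity(qid, lang):
    params = {
        "action": "wbgetentities",
        "ids": qid,
        "format": "json",
        "uselang": lang,  # controls the language of (some) response fields
    }
    return requests.get(API, params=params).json()

print(get_entity("Q42", "de"))
```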
Oct 12 2015
This does not block me from using the service, as my client can produce valid SPARQL TSV output from SPARQL JSON output. Still, it's a nasty violation of the SPARQL specification.
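For illustration, a minimal sketch of such a client-side conversion (not the actual client mentioned above): it reads SPARQL 1.1 JSON results and emits TSV as defined by the SPARQL 1.1 CSV/TSV results spec, with '?'-prefixed variable names in the header, IRIs in angle brackets, and Turtle-style literal quoting.

```python
# Hedged sketch: convert SPARQL 1.1 JSON results into spec-conformant TSV.
import json
import sys

def term_to_tsv(term):
    if term["type"] == "uri":
        return "<" + term["value"] + ">"
    if term["type"] == "bnode":
        return "_:" + term["value"]
    # literal: escape backslashes, quotes, tabs, and newlines
    value = (term["value"].replace("\\", "\\\\").replace('"', '\\"')
             .replace("\t", "\\t").replace("\n", "\\n"))
    literal = '"' + value + '"'
    if "xml:lang" in term:
        return literal + "@" + term["xml:lang"]
    if "datatype" in term:
        return literal + "^^<" + term["datatype"] + ">"
    return literal

def json_to_tsv(results):
    variables = results["head"]["vars"]
    print("\t".join("?" + v for v in variables))
    for binding in results["results"]["bindings"]:
        print("\t".join(
            term_to_tsv(binding[v]) if v in binding else "" for v in variables
        ))

json_to_tsv(json.load(sys.stdin))
```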
Oct 8 2015
This is kind of a subtask of https://phabricator.wikimedia.org/T114741
Oct 7 2015
The export should be available via a request parameter, e.g. format=csv or format=tsv.
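A sketch of what the proposed interface could look like, next to the content-negotiation route the SPARQL protocol offers; note that the format= parameter shown here is the proposed feature, not an existing one, and whether the service honours these Accept headers is an assumption:

```python
# Sketch: proposed format= parameter versus HTTP content negotiation.
import requests

ENDPOINT = "https://query.wikidata.org/sparql"
QUERY = "SELECT ?s WHERE { ?s ?p ?o } LIMIT 3"

# Proposed: a simple request parameter (does not exist yet)
proposed = requests.get(ENDPOINT, params={"query": QUERY, "format": "tsv"})

# Alternative: content negotiation via the Accept header
negotiated = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "text/tab-separated-values"},
)
print(negotiated.text)
```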