User Details
- User Since
- Apr 12 2017, 2:17 AM (372 w, 6 d)
- Availability
- Available
- LDAP User
- Unknown
- MediaWiki User
- Andrawaag [ Global Accounts ]
May 23 2023
My impression from speaking with multiple people at the hackathon is that the authentication requirement is a barrier for people to even use the service in the first place, which defeats the point.
Jan 17 2023
As far as I know, such a tool is still difficult to develop. There is an API to read an EntitySchema, but it is still not possible to add an EntitySchema to a Wikibase other than manually, by pushing the save button. See related ticket: https://phabricator.wikimedia.org/T301336
Dec 6 2022
Jun 29 2022
Apr 25 2022
Perfect, thanks @Lucas_Werkmeister_WMDE. I agree it is a feature.
Apr 20 2022
Can the password feature on the SDCQC please, please, please, please pretty please be removed/disabled? The SDCQC is an epic feature, but almost useless thanks to the requirement to log in. Basically, Commons remains a data silo on its own.
I keep running into issues where I am building a query that I want to share, reuse in a Jupyter notebook, or run as a federated query from Wikidata. The decision to require OAuth here is really a poor design choice.
Apr 19 2022
Apr 5 2022
Mar 16 2022
Feb 28 2022
Adding security to WCQS might have an unexpected effect. Since it is not possible to write a federated query where the query is submitted to a remote SPARQL endpoint, federated queries can only be run directly on WCQS, which means that WCQS needs to deal with all the complexity of a query. Removing that login requirement would allow the majority of the complexity to be dealt with at a remote endpoint.
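To illustrate the point, here is a minimal sketch of the kind of federation that a login requirement on WCQS blocks. The query text, property usage, and variable names are illustrative assumptions, not a tested query; the idea is that the query would be submitted to WDQS, which would delegate the Commons-specific part to WCQS via a SERVICE clause:

```python
# Hypothetical federated query: submitted to WDQS, which delegates the
# Commons part to WCQS via SPARQL 1.1 federation (SERVICE clause).
# If WCQS requires a login, WDQS cannot call it, so the whole query
# has to run on WCQS instead.
WDQS_ENDPOINT = "https://query.wikidata.org/sparql"
WCQS_ENDPOINT = "https://commons-query.wikimedia.org/sparql"

federated_query = f"""
SELECT ?file ?depicts WHERE {{
  ?depicts wdt:P31 wd:Q16521 .     # taxa, evaluated on Wikidata
  SERVICE <{WCQS_ENDPOINT}> {{     # delegated to WCQS
    ?file wdt:P180 ?depicts .      # files depicting those taxa
  }}
}}
LIMIT 10
"""
```

With the login requirement in place, the SERVICE hop fails and the only option is to run everything directly on WCQS.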
Feb 25 2022
Feb 17 2022
Feb 7 2022
@Daniel_Mietchen This one was indeed already done; however, it is forwarded to its subdomain on Amazon. I am still trying to find out how to deal with this in JavaScript. So far it seems impossible to catch redirects.
Feb 6 2022
Feb 3 2022
Can both be allowlisted? I am asking because your observation is accurate: URLs under images.collections.yale.edu are indeed forwarded to prd-cds2-image-store-ypm.s3.amazonaws.com. The issue I am facing is that the source uses images.collections.yale.edu, while it is forwarded to an Amazon subdomain. So far it seems impossible to me to capture the redirect header information in JavaScript. If it is not possible, allowlisting the Amazon subdomain would already be helpful. I can make a separate script that resolves all the primary URLs while I try to find a solution to fetch redirect headers in the upload workflow.
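The separate URL-resolving script mentioned above could look roughly like this in Python (server-side, where redirects are visible, unlike in browser JavaScript). The allowlist contents, function names, and example path are my assumptions for illustration:

```python
import urllib.parse
import urllib.request

# Hypothetical allowlist: the primary host plus its redirect target.
ALLOWLIST = {
    "images.collections.yale.edu",
    "prd-cds2-image-store-ypm.s3.amazonaws.com",
}

def is_allowlisted(url: str, allowlist=ALLOWLIST) -> bool:
    """Check whether a URL's host is on the allowlist."""
    return urllib.parse.urlparse(url).hostname in allowlist

def resolve_final_url(url: str) -> str:
    """Follow redirects server-side and return the final URL.

    urllib follows redirects automatically, so the response object
    carries the URL of the last hop (e.g. the S3 subdomain), which a
    browser fetch() would hide from scripts.
    """
    with urllib.request.urlopen(url) as resp:  # needs network access
        return resp.url
```

A pre-resolution pass with `resolve_final_url` over the source URLs would produce the Amazon URLs directly, sidestepping the JavaScript redirect problem.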
Jan 28 2022
Jan 26 2022
Jan 22 2022
Jan 21 2022
One example is: https://www.gbif.org/occurrence/1988351533, which lists the metadata of a specimen as https://www.nhm.ac.uk/services/media-store/asset/d30de0ee09e45ad716dd8b5ba766a49c16d99fc2/contents/preview
Jan 19 2022
Jan 2 2022
Dec 10 2021
Jul 15 2021
Jun 29 2021
You are completely right, the hashes are not needed to apply EntitySchemas in memory before ingestion into Wikidata. I need the hashes as a sanity check that my script creates exactly the same RDF as produced by Wikidata natively. So the hashes are only needed during the development phase of the script.
I wasn't looking for guarantees about the hash values. They have value as a sanity check in a [[ https://github.com/Wikidata/triplify-json | reverse-engineering project ]] we are doing to reproduce the Wikidata/Wikibase RDF outside Wikibase itself. We need this to be able to apply EntitySchemas pre-ingestion; currently, EntitySchemas can only be applied after data ingestion. The script, as it currently works, builds the RDF from the JSON. That JSON object is enriched, and the idea is to then verify that the new JSON object still fits the EntitySchema before it is submitted to the Wikidata API.
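The sanity check described above can be sketched as follows, assuming both the script's output and Wikidata's native dump are serialized as N-Triples. The function name and the example triples are hypothetical; note that this simple approach would not handle blank nodes, whose labels can differ between serializations:

```python
import hashlib

def ntriples_hash(nt: str) -> str:
    """Order-insensitive SHA-256 over an N-Triples document.

    Sorting the triples first means two serializations of the same
    graph hash identically even if their triple order differs.
    """
    triples = sorted(line.strip() for line in nt.splitlines() if line.strip())
    return hashlib.sha256("\n".join(triples).encode("utf-8")).hexdigest()

# Two serializations of the same (hypothetical) graph, different order:
ours = """
<http://www.wikidata.org/entity/Q42> <http://www.w3.org/2000/01/rdf-schema#label> "Douglas Adams"@en .
<http://www.wikidata.org/entity/Q42> <http://www.wikidata.org/prop/direct/P31> <http://www.wikidata.org/entity/Q5> .
"""
theirs = """
<http://www.wikidata.org/entity/Q42> <http://www.wikidata.org/prop/direct/P31> <http://www.wikidata.org/entity/Q5> .
<http://www.wikidata.org/entity/Q42> <http://www.w3.org/2000/01/rdf-schema#label> "Douglas Adams"@en .
"""
assert ntriples_hash(ours) == ntriples_hash(theirs)
```

If the script's hash matches Wikidata's, the reverse-engineered RDF is byte-for-byte equivalent, which is all the check is for during development.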
Jun 19 2021
I would not call it evicting scholarly articles. Scholarly articles are currently a major driving force for Wikidata; however, their volume is problematic because it is becoming more difficult to see other topics (sometimes unrelated to scholarly articles). I have been thinking about, and working towards, a federated landscape of linked Wikibases and other Semantic Web resources for a while now. Building such a federated landscape is already easy peasy: we have WBStack and the Wikibase Docker images, but also platforms like GraphDB, Virtuoso and Stardog (to mention just a few). It would take a simple hackathon and some motivated users to build a nice prototype.
May 30 2021
Apr 3 2021
I would love to see PAWS move to Python 3.7+. The Wikidata Integrator now requires Python 3.7+, so it can't be used on the current PAWS.
Mar 17 2021
In the ShEx CG, the following fix was suggested:
Mar 15 2021
I have reproduced the issue by running a Wikibase in both the Japanese and Korean language versions as configured using https://github.com/andrawaag/wikibase_languages
Jan 3 2021
Yes, batch parsing of EntitySchemas is still difficult. There are, however, some tricks one can use to avoid parsing the HTML. Under Special Pages there is EntitySchemaText: add an EntitySchema number (e.g. E42) to get the EntitySchema in ShExC. Subsequently, JSON renderings of the ShExC can be obtained with parsers like shex-to-json.
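A minimal sketch of the first step, fetching the raw ShExC for a batch of schemas. The helper name is mine; the Special:EntitySchemaText URL pattern is the one described above:

```python
import urllib.parse

BASE = "https://www.wikidata.org/wiki/Special:EntitySchemaText"

def entityschema_text_url(schema_id: str) -> str:
    """URL of the Special:EntitySchemaText page for a schema,
    which returns the raw ShExC instead of HTML."""
    return f"{BASE}/{urllib.parse.quote(schema_id)}"

# Fetching the ShExC itself requires network access, e.g.:
# import urllib.request
# shexc = urllib.request.urlopen(entityschema_text_url("E42")).read().decode("utf-8")
urls = [entityschema_text_url(sid) for sid in ("E42", "E10")]
```

The fetched ShExC can then be fed to a ShExC-to-ShExJ converter to get the JSON renderings.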
Nov 23 2020
Various solutions now exist to semi-automatically extract a schema from Wikidata, such as sheXer or Shape Designer. In this notebook, sheXer is used to extract the schema from a set of external identifiers.
Nov 12 2020
Jul 6 2020
Led to an online video: https://www.youtube.com/watch?v=NKfOY4U_QRc
May 10 2020
May 9 2020
Due to some error we were not able to record the first tutorial. I am happy to schedule another tutorial on how to write a schema for EntitySchema, if there is interest.
We are meeting at time 1 at https://streamyard.com/nnan2qvfaw
How about time 1 (https://www.timeanddate.com/worldclock/meetingdetails.html?year=2020&month=5&day=9&hour=15&min=0&sec=0&p1=48&p2=3759&p3=233&p4=770&p5=192) or time 2 (https://www.timeanddate.com/worldclock/meetingdetails.html?year=2020&month=5&day=9&hour=16&min=0&sec=0&p1=48&p2=3759&p3=233&p4=770&p5=192)?
May 8 2020
Mar 16 2020
Mar 9 2020
Feb 17 2020
Yes, I remember the 1000 suggestion, and we can certainly retry perpetually, but somehow that does not feel right: every unsuccessful attempt is yet another request bothering the API. Would it not be better to simply stop after 25 attempts and let the API settle down?
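The bounded-retry idea can be sketched like this; the function names, the linear backoff, and the fake flaky API are my assumptions for illustration:

```python
import time

def call_with_retries(request_fn, max_attempts=25, base_delay=0.5):
    """Retry a flaky API call, giving up after max_attempts.

    Waits a little longer after each failure (linear backoff here;
    exponential would also work) instead of hammering the API forever.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return request_fn()
        except Exception:
            if attempt == max_attempts:
                raise  # give up and let the API settle down
            time.sleep(base_delay * attempt)

# Demo with a fake API that fails twice, then succeeds:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("API not settled yet")
    return "ok"

result = call_with_retries(flaky, max_attempts=25, base_delay=0)
```

With `max_attempts=25` the caller stops bothering the API after 25 failures instead of retrying perpetually.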
Feb 13 2020
Jan 22 2020
Jan 4 2020
This issue will be reviewed in the ShEx CG meeting on January 8th: https://github.com/shexSpec/shex/blob/master/meetings/2020/20200108-agenda.md
Nov 27 2019
Nov 22 2019
Do these articles have DOIs or PMIDs?
Could this be looped through https://www.wikidata.org/wiki/Wikidata:WikiProject_iNaturalist?
Nov 20 2019
I think it makes sense to close this issue. As @Addshore suggests, with the extension being available to any Wikibase, it is done.
Oct 4 2019
Sep 28 2019
This is enabled by changing
localhost:8181/wiki/Qx -> localhost:8181/entity/Q1.ttl
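The rewrite above can be expressed as a small helper; the function name is mine, and the pattern assumes item page URLs of the form `/wiki/Q<number>`:

```python
import re

def wiki_to_entity_ttl(url: str) -> str:
    """Rewrite a Wikibase item page URL to its Turtle data URL,
    e.g. .../wiki/Q1 -> .../entity/Q1.ttl"""
    return re.sub(r"/wiki/(Q\d+)$", r"/entity/\1.ttl", url)

print(wiki_to_entity_ttl("http://localhost:8181/wiki/Q1"))
# → http://localhost:8181/entity/Q1.ttl
```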
Sep 25 2019
Sep 11 2019
Aug 30 2019
Yes! Solved, thank you
Aug 22 2019
The issue resurfaced. To reproduce it, the following steps need to be followed: