Sat, Oct 24
I am not a developer, but I have a hunch that it definitely needs very strict code review, and thorough investigation of whether embracing external software will benefit us in the long run. Is it indeed open source, and in that regard, is it fully compatible with our own codebase? Is it hack- and tamper-proof, and will it not be a magnet for various problems (important, considering how visible Wikimedia projects are and how much vandalism and other malicious attacks we are already subject to)? What kind of community is behind it, and is it reliable and in line with Wikimedia's values? Will it be around, supported and further developed for a good amount of time (I'd say 10 years or more), so that we don't end up with something we need to maintain ourselves with our own very limited resources?
FYI, in response to my hackathon project I was approached on Twitter by developers of the Modelviewer software, who suggested it as a tool to make integration of textured 3D models in currently widely supported file formats very easy for us.
I created this task because the developers of Modelviewer pointed it out to me in this Twitter thread as a vehicle for allowing us to implement T246901: https://twitter.com/modelviewer/status/1319785320246775808?s=20
I have just participated in the Hack4OpenGLAM Hackathon, part of the Creative Commons Global Summit 2020. One of the datasets available was a set of 1,000+ CC0-licensed 3D models of cultural heritage available on Sketchfab.
Jul 28 2020
Jul 17 2020
Jul 7 2020
Jun 29 2020
Jun 20 2020
Jun 4 2020
May 18 2020
I would say that this is still a very relevant ticket and request. The distinction between artwork and file is coming up in conversations with GLAM staff all the time; the importance of the distinction is stressed by experts in the GLAM sector. We want the Commons community to be encouraged to model this correctly. And in order for Wikimedia Commons to be/stay an attractive platform for GLAMs around the world, files depicting creative works should be as discoverable as any other file on Commons.
References for statements are important for the use case of GLAM media file imports: references will be used to indicate that certain statements are sourced from the cultural institution's website, to distinguish them from community-created statements. See also comment by @Dominicbm in the same Facebook thread as mentioned above.
As discussed in a thread in the Wikidata+GLAM Facebook group, a use case for references on Wikimedia Commons is: to indicate that specific edits have been done (or supported) by machine learning tools.
Wow, this is an old one but it is still relevant. Pinging @David_Haskiya_WMSE
May 10 2020
@Multichill and I have started brainstorming about this between the two of us, and we discussed (among other things) the following points to take into account:
May 4 2020
Apr 30 2020
Apr 10 2020
Apr 4 2020
Mar 16 2020
The GLAMs that I interact with also ask very impatiently for a SPARQL endpoint, as a means for themselves and for supporting Wikimedians to check and maintain their own collections on Commons. WDQS is a crucial component of refined batch maintenance and editing tools as well (think PetScan, TABernacle), and it powers structured data-driven lists and galleries, which have been quite important for several SDC GLAM pilots.
Mar 12 2020
Multichill has good points above - I think I recall that there are some solid practical reasons why our 3D feature does not support textures yet.
Jan 8 2020
Dec 4 2019
Nov 25 2019
Nov 23 2019
I want this too. Actually a deceptively simple thing that would be quite useful on Wikimedia Commons as well.
My query for this workshop: location of collections that have works by James Ensor (whose work will be public domain on January 1, 2020) https://w.wiki/CYu
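For readers who can't follow the short link: the query behind it isn't reproduced here, but a sketch of the general shape of such a WDQS query can be run from Python with only the standard library. The property IDs used (P170 creator, P195 collection, P625 coordinate location) and the label-matching approach are my own illustration, not necessarily what the linked query does:

```python
import json
import urllib.parse
import urllib.request

# Illustrative SPARQL: locations of collections holding works whose
# creator is labelled "James Ensor". This is a sketch, not the exact
# query behind https://w.wiki/CYu.
QUERY = """
SELECT ?collection ?collectionLabel ?coords WHERE {
  ?work wdt:P170 ?creator .          # creator of the work
  ?creator rdfs:label "James Ensor"@en .
  ?work wdt:P195 ?collection .       # collection holding the work
  ?collection wdt:P625 ?coords .     # coordinate location of the collection
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
"""

def run_query(sparql):
    """Run a SPARQL query against the Wikidata Query Service, return bindings."""
    url = "https://query.wikidata.org/sparql?" + urllib.parse.urlencode(
        {"query": sparql, "format": "json"})
    req = urllib.request.Request(url, headers={"User-Agent": "glam-example/0.1"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"]["bindings"]
```

Calling `run_query(QUERY)` returns one row per collection, each with a label and a coordinate pair that can be plotted on a map layer.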
Nov 22 2019
I am happy
And thanks to the Minefield tool, I have successfully added structured data to the files in that category now! \o/
This task is done! The images are beautiful, and they are here: https://commons.wikimedia.org/wiki/Category:Seikei_Zusetsu
We're at the Wiki-Techstorm-2019 and @Husky is building a tool, aptly named Minefield, to convert filenames to M item numbers, see T238908: Minefield: A tool to convert Commons page title to media ID's. For now this can provide help in formatting the right commands for QuickStatements.
@Husky is building a tool, aptly named Minefield, to convert filenames to M item numbers, see T238908: Minefield: A tool to convert Commons page title to media ID's. This will provide the missing link needed to feed the necessary edits into QuickStatements.
I tried to enter two versions of lists of files in the tool, but the purple Comic Sans just kept dancing in front of me...
Another option is using PetScan. An easy approach for end users would be to feed the tool a PetScan ID that outputs Commons files, for example https://petscan.wmflabs.org/?psid=13840179
There are various ways to get lists of filenames, and each of them will probably produce different results.
The comment below is a good one to take into account in the context of technical scoping/prioritization in the WMSE-Tools-for-Partnerships-2019-Blueprinting project:
In order to be able to do this, we first need T222291: Add support for ISA in translatewiki.net (include ISA in TranslateWiki). There's more interest in translating the ISA interface into other languages as well.
Can my colleagues from the StructuredDataOnCommons team do anything to make it easier to include filename-to-Mid conversion in QuickStatements? It is actually a blocker for proper Commons-related batch edits, as outlined for instance in T238443: Add P180 (Depicts) and P6243 (Digital representation of) structured data to Commons files representing artworks by Jakob Smits, and I'll be happy to help give things a push if needed.
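The conversion itself is mechanical: a file's MediaInfo ID on Commons is the letter "M" followed by the file page's page ID, which the public API exposes. A minimal, stdlib-only Python sketch (the function names are my own, not Minefield's or QuickStatements'):

```python
import json
import urllib.parse
import urllib.request

API = "https://commons.wikimedia.org/w/api.php"

def mid_from_pageid(pageid):
    # The structured-data (MediaInfo) entity ID is "M" + the page ID.
    return f"M{pageid}"

def mids_for_titles(titles):
    """Map Commons file titles (e.g. 'File:Example.jpg') to their M-ids."""
    params = urllib.parse.urlencode({
        "action": "query",
        "titles": "|".join(titles),
        "format": "json",
        "formatversion": "2",
    })
    with urllib.request.urlopen(f"{API}?{params}") as resp:
        pages = json.load(resp)["query"]["pages"]
    # Pages that don't exist carry no "pageid" key and are skipped.
    return {p["title"]: mid_from_pageid(p["pageid"])
            for p in pages if "pageid" in p}
```

A batch tool would run `mids_for_titles` over the filename list and emit one QuickStatements line per resulting M-id.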
Can anyone perhaps help with this in the context of the Wiki-Techstorm-2019? I know that we have some people here today and tomorrow who are interested in translating the ISA interface into other languages.
Nov 18 2019
Nov 16 2019
It's not correct to assign this to me, as I actually want to ask around at the Wiki-Techstorm-2019 whether someone else can solve this (it may require coding).
- For which Wikidata items does this work? Only for paintings, or also for other two-dimensional works (I'd be happiest if it were the latter, hehe)?
- What are the other conditions to trigger the upload? Creator has a death date before a certain date + the work has a creation date before a certain date?
- The URL needs to point to the exact image file location, not to the webpage
- How frequently does the bot run, i.e. how long should people be expected to wait before the upload has happened?
- What should folks do when they notice the upload has not happened after ... days?
- Other conditions and points of attention to mention?
Nov 15 2019
Ideally, I'd like to find a workflow for this that is achievable by a 'muggle' (someone who is not a coder/developer) like myself (i.e. it's probably very easy to achieve with Pywikibot but I'd like to do it with a tool).
Would be extra nice if this template uses structured data! See also T238415: Add structured data on Commons to newly uploaded files during the Tech Storm
Nov 12 2019
Actually, the Wikidata Art Depiction Explorer (WADE) does this. Yay!
Nov 5 2019
Apr 5 2019
My enthusiasm will be even greater if the result is reasonably usable by people without coding skills 😀
Oct 13 2018
I would just do this with one of the existing great data import tools for Wikidata! No need to write a script in my opinion? Unless learning to write such a script is a goal in itself...?