
Create a dataset of past GLAM-Wiki collaborations
Open, Needs TriagePublic

Description

Create a structured dataset of past documented GLAM-Wiki collaborations in the Wikimedia movement.

This data will help to understand the diversity as well as the needs of GLAM-Wiki collaborations. This data is currently spread across various sources such as the GLAM Newsletter, Grant reports, Affiliate reports, Meta-Wiki etc.

Event Timeline

Restricted Application added a subscriber: Aklapper. · Nov 11 2019, 3:08 PM

Very interested in this task, to help track instances of Open GLAM data releases on Wikimedia Commons.

In the Open GLAM survey, we currently record such instances through a manual, word-of-mouth process. It would be fantastic if one could query Open GLAM data releases of media on Wikimedia Commons in a more structured and automated way.

Will follow with interest and am happy to discuss and assist!

SGill added a comment. · Apr 1 2020, 8:44 AM

@Douglaskmccarthy Indeed, it would be really awesome if we were able to query this data.

Here is the page on Meta-Wiki about this research project:

https://meta.wikimedia.org/wiki/Research:GLAM-Wiki_Mapping

Hopefully, I will be sharing more happy news in the coming months.

Here's a Wikidata query of institutions that are described in the OpenGLAM Survey, with their Commons categories: https://w.wiki/LkJ

Please note that a Commons category does not necessarily mean that there is content from that institution uploaded to Wikimedia Commons; it may also just be a small category with photographs of the institution's building taken by Wikimedians. And the data may not be complete: institutions without a Commons category in Wikidata may actually have one, but it hasn't been added to Wikidata yet.

In order to be able to query for actual uploads from the institution, we'd need to dig deeper. Having that data as structured data + being able to retrieve it through a SPARQL endpoint for Wikimedia Commons would be super helpful. See T221921: Provision search endpoint for SDC. Requirements from Product Team.
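A minimal sketch of the kind of lookup the linked query performs, written in Python against the public Wikidata SPARQL endpoint. Assumptions not taken from this thread: Q33506 (museum) as the institution class and P373 (Commons category) as the linking property; the linked query's restriction to OpenGLAM-survey institutions is not reproduced here.

```python
# Hedged sketch: list institutions (here, museums) together with their
# Commons category (P373), if one is recorded in Wikidata.
import json
import urllib.parse
import urllib.request

ENDPOINT = "https://query.wikidata.org/sparql"

QUERY = """
SELECT ?institution ?institutionLabel ?commonsCat WHERE {
  ?institution wdt:P31/wdt:P279* wd:Q33506 .        # instance of (a subclass of) museum
  OPTIONAL { ?institution wdt:P373 ?commonsCat . }  # Commons category, if linked
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 100
"""

def fetch(query: str) -> dict:
    """Run a SPARQL query against the endpoint and return JSON results."""
    url = ENDPOINT + "?" + urllib.parse.urlencode({"query": query, "format": "json"})
    req = urllib.request.Request(url, headers={"User-Agent": "glam-wiki-sketch/0.1"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    for row in fetch(QUERY)["results"]["bindings"][:5]:
        print(row["institutionLabel"]["value"],
              row.get("commonsCat", {}).get("value"))
```

As the comment above notes, this only finds the category link, not actual uploads; querying uploads themselves would need structured data on Commons plus a SPARQL endpoint for it (T221921).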

@David_Haskiya_WMSE pinging you as this is related to what we discussed earlier today.

Salgo60 added a subscriber: Salgo60. · Edited · Apr 3 2020, 5:19 AM

Linked-data maturity is of interest. In the dataset it would be nice to see the quality of the metadata, like Europeana tries to measure in the Metadata Quality Assurance Framework by Péter Király.

My understanding is that the Europeana people have big problems with the metadata quality of delivered material, so they have created a Metadata Quality Assurance Framework for Europeana.

In Sweden we see that all of the museum material uploaded to Europeana is missing "same as" links and coordinates. It would therefore also be of interest to see the metadata quality we get in GLAM-Wiki collaborations, to understand what help those institutions need and also what added value the Wiki community provides regarding depicts statements in pictures/metadata/linked data...

If you check what is delivered from Sweden via SOCH in the Metadata Quality Assurance Framework for Europeana, you can see that they deliver 1,130,565 objects, but no coordinates or "same as" links have been delivered.
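The kind of per-field completeness measurement described above can be sketched as a simple ratio. The records below are hypothetical; a real framework such as Király's measures many more quality dimensions than completeness.

```python
# Minimal sketch of a per-field metadata-completeness metric, in the spirit
# of the Europeana Metadata Quality Assurance Framework. Records are made up.
def completeness(records, fields):
    """Fraction of records with a non-empty value, per field."""
    n = len(records)
    return {f: sum(1 for r in records if r.get(f)) / n for f in fields}

records = [
    {"title": "Portrait", "creator": "Carl Larsson", "same_as": "Q187310", "coords": None},
    {"title": "Farm photo", "creator": "Carl Larsson", "same_as": None, "coords": None},
    {"title": "Interior", "creator": "Carl Larsson", "same_as": None, "coords": None},
]

scores = completeness(records, ["title", "creator", "same_as", "coords"])
print(scores)
```

A dataset scoring 0.0 on "same_as" and "coords", as in the SOCH delivery above, signals exactly the kind of help an institution needs before enrichment can work.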

The consequence is that the Europeana enrichment adds errors: it guesses that everyone called "Carl Larsson" is the same as Wikidata item Q187310, but a Swedish museum also has 1 million photos from a different person named Carl Larsson, Q5937128.
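A toy illustration of the failure described above: matching on a name string conflates the two distinct people, while matching on identifiers keeps them apart. The two QIDs are from the comment; everything else is made up for the example.

```python
# Why "match on the name" breaks: two distinct people, one shared name string.
people = [
    {"qid": "Q187310", "name": "Carl Larsson"},   # the one Europeana guesses
    {"qid": "Q5937128", "name": "Carl Larsson"},  # the museum's Carl Larsson
]

# Index by name string, as a text-based enrichment effectively does:
by_name = {}
for p in people:
    by_name.setdefault(p["name"], []).append(p["qid"])

# The name lookup cannot tell the two apart:
assert by_name["Carl Larsson"] == ["Q187310", "Q5937128"]

# Indexing by identifier is unambiguous:
by_qid = {p["qid"]: p for p in people}
assert by_qid["Q187310"] is not by_qid["Q5937128"]
```

This is why exporting linked data as plain text strings, as described below, loses exactly the information needed to avoid the mix-up.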

The sad thing is that the museum has good metadata, but when it is exported to Europeana the people doing the export convert linked data to plain text, and we get a mess...

An example of the cost of bad metadata is that en:Wikipedia refused to have links to Europeana and deleted the template. You can also follow the link for Carl Larsson to Europeana and see it is a mix of Q187310 and Q5937128.

  • Blog post about this: "Carl Larsson who is that - sadly Europeana doesnt know --> #Metadatadebt"

My guess is that if Europeana has this much trouble getting one museum to say that its artist is the same as an artist at another museum, this will never scale to entity management for what a picture depicts using linked data, unless we start to communicate the quality of delivered metadata and entity management. The challenge is that sending text strings is easy, while entity management requires a totally different skill set and management culture that we don't see in most GLAM institutions today.

I have been working with international bank transactions, and no one there would think of sending something as text strings with no unique identifier and then matching bank accounts on a name: "we give all the money to one person called 'Carl Larsson'". For me, the fact that no one reacts is an indication of a lack of maturity in the network: people don't know how to report errors, and no one gives us an action plan for how and when it will be fixed. Most of the basic components are missing, like getting a helpdesk ticket and an easy way to see its status...

#Linkeddata needs #linkedpeople. If that parameter can be measured, then we at least understand better what actions are needed...