The machine-readable description of Wikidata dumps in [DCAT-AP](http://www.wikidata.org/entity/Q28600460) is only provided as RDF/XML and is poorly documented, which limits its usefulness. Please automatically import the RDF file into the Wikidata Query Service so we can get a list of current dumps, for instance with [this SPARQL query](https://query.wikidata.org/#PREFIX%20dcat%3A%20%3Chttp%3A%2F%2Fwww.w3.org%2Fns%2Fdcat%23%3E%0A%0ASELECT%20%3Furl%20%3Fdate%20%3Fsize%20WHERE%20%7B%0A%20%20%3Chttps%3A%2F%2Fwww.wikidata.org%2Fabout%23catalog%3E%20dcat%3Adataset%20%3Fdump%20.%0A%20%20%3Fdump%20dcat%3Adistribution%20%5B%0A%20%20%20%20dc%3Aformat%20%22application%2Fjson%22%20%3B%0A%20%20%20%20dcat%3AdownloadURL%20%3Furl%20%3B%0A%20%20%20%20dcat%3Aissued%20%3Fdate%20%3B%0A%20%20%20%20dcat%3AbyteSize%20%3Fbytes%20%0A%20%20%5D%20.%0A%7D):
~~~
PREFIX dcat: <http://www.w3.org/ns/dcat#>
PREFIX dct: <http://purl.org/dc/terms/>
SELECT ?url ?date ?size WHERE {
<https://www.wikidata.org/about#catalog> dcat:dataset ?dump .
?dump dcat:distribution [
dct:format "application/json" ;
dcat:downloadURL ?url ;
dcat:issued ?date ;
dcat:byteSize ?size
] .
}
~~~
The only open question is whether to keep information about dumps that have been removed from <https://dumps.wikimedia.org/wikidatawiki/entities/>. I don't think so, but DCAT information from other dump hosts such as the Internet Archive (see their [list of Wikidata dumps](https://archive.org/details/wikimediadownloads?and%5B%5D="Wikidata+entity+dumps"&sort=-publicdate)) should be included as well.