Feature summary (what you would like to be able to do and where):
I'd like to be able to download the data (Qs/Ps) in a wikibase as a dataset, and I would like this feature to be supported as part of Wikibase Cloud. There may also be a need for history, discussion pages, etc., but I'm focused on getting the statements/triples out in some kind of plain-text form (RDF, TTL, YAML, etc.).
Use case(s) (list the steps that you performed to discover that problem, and describe the actual underlying problem which you want to solve. Do not describe only a solution):
I work in cultural heritage. I work on the principle that interactive sites we build as part of projects, like a wikibase, will at some point cease to be updated as projects lose momentum, team members, etc., and will then at some point go offline. However, the data remains of value *after* the interactive website is gone. So my model of preservation involves creating dumps of the data produced and depositing those somewhere for reuse at a later date by somebody who wants to use that data.
Benefits (why should this be implemented?):
There are inefficient, computationally expensive workarounds for this problem - scraping a site with wget, or using dumpgenerator https://github.com/WikiTeam/wikiteam/issues/395 - but they are unsupported by Wikibase Cloud. This feature would encourage Wikibase Cloud creators to carefully consider the preservation of the data they produce, and give them a supported tool for exporting their data at the end of a project (which is often a requirement of a research project).
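To illustrate what the unsupported workaround looks like today, here is a rough sketch that harvests per-entity Turtle using two standard MediaWiki/Wikibase features: the action API (list=allpages) to enumerate items, and Special:EntityData to serve each entity's RDF. The base URL (example.wikibase.cloud) is a placeholder, and the Item namespace number (120) is the Wikibase default but may differ on a given instance.

```python
# Sketch of a per-entity RDF harvest from a Wikibase instance.
# Assumptions (placeholders): the instance base URL, and that items
# live in namespace 120 (the Wikibase default).
import json
import urllib.parse
import urllib.request


def entitydata_url(base_url: str, entity_id: str, fmt: str = "ttl") -> str:
    """Build the Special:EntityData URL that serves one entity's RDF."""
    return f"{base_url}/wiki/Special:EntityData/{entity_id}.{fmt}"


def iter_item_ids(base_url: str, namespace: int = 120):
    """Yield item IDs (Q...) by paging through the Item namespace."""
    params = {"action": "query", "list": "allpages",
              "apnamespace": namespace, "aplimit": "max", "format": "json"}
    while True:
        url = f"{base_url}/w/api.php?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
        for page in data["query"]["allpages"]:
            # Titles look like "Item:Q42"; keep only the entity ID.
            yield page["title"].split(":")[-1]
        if "continue" not in data:
            break
        params.update(data["continue"])
```

Each yielded ID can then be dereferenced via entitydata_url() and the Turtle files concatenated into a dump (taking care with repeated prefix declarations). A supported, server-side dump would make all of this unnecessary.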