At this point, the only way to rank Wikidata results is to order them by sitelink count. This is a fairly good indicator of how many different languages/cultures are interested in a topic, but it is not very accurate, especially when a topic is mostly relevant to a single language.
I propose we introduce a new type of entry to WDQS:
```lang=sparql
# Naming is TBD
<https://en.wikipedia.org/wiki/Albert_Einstein> prefix:total_page_views [integer] .
<https://en.wikipedia.org/wiki/Albert_Einstein> prefix:last_24h_page_views [integer] .
```
A script would download the hourly files from [[ https://dumps.wikimedia.org/other/pageviews | dumps ]] and increment the counters once an hour. The updates should happen [[ https://stackoverflow.com/questions/46030514/update-or-create-numeric-counters-in-sparql-upsert/46042692#46042692 | in bulk ]]. Each file contains about 5 million entries (<40 MB gzipped).
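The per-hour aggregation step could look something like this. A minimal sketch, assuming the dump files use the documented space-separated line format `domain_code page_title count_views total_response_size`; the function name is just a placeholder:

```lang=python
from collections import Counter

def parse_pageviews(lines):
    """Aggregate one hourly dump into view counts per (project, title).

    Assumes each line has the format:
        domain_code page_title count_views total_response_size
    Malformed lines are skipped.
    """
    counts = Counter()
    for line in lines:
        parts = line.strip().split(" ")
        if len(parts) != 4:
            continue  # skip malformed lines
        project, title, views, _size = parts
        counts[(project, title)] += int(views)
    return counts
```

The resulting counter can then be chunked into bulk SPARQL `DELETE`/`INSERT` updates as in the linked Stack Overflow answer.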
Additionally, we may want to keep a running total for the last 24 hours - a bit trickier, but also doable - e.g. by keeping the totals of the last 24 files in memory and uploading the deltas every hour. On restart, the service would re-download the last 24 files, delete all existing 24h totals from the store, and re-upload them.
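The in-memory rolling window could be sketched as below. This is only an illustration of the idea, not the actual implementation; the class and method names are made up:

```lang=python
from collections import Counter, deque

class RollingTotals:
    """Keep a rolling total of page views for the last N hours in memory."""

    def __init__(self, window=24):
        self.window = window
        self.hours = deque()      # one Counter per hourly file
        self.totals = Counter()   # current rolling totals

    def add_hour(self, hourly_counts):
        """Ingest one hour of counts; return per-page deltas to upload.

        The delta for a page is (new hour's count) minus (the count from
        the hour that just fell out of the window), which is exactly the
        adjustment to apply to the stored 24h value.
        """
        deltas = Counter(hourly_counts)
        self.hours.append(Counter(hourly_counts))
        self.totals.update(hourly_counts)
        if len(self.hours) > self.window:
            expired = self.hours.popleft()
            self.totals.subtract(expired)
            deltas.subtract(expired)
            # drop pages whose rolling total fell to zero
            for page in [p for p, v in self.totals.items() if v <= 0]:
                del self.totals[page]
        return deltas
```

On restart, replaying the last 24 files through `add_hour` rebuilds the same state that was lost.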
P.S. I am hacking on it at the moment (in Python). Naming suggestions for the predicates are welcome.