Hello!
I’m a representative of the leading search engine in Russia, Yandex LLC (http://www.yandex.ru). Yandex is the most popular and traffic-rich web property on the Russian Internet, with more than 24 million unique users weekly.
We think the content of the site https://www.wikidata.org is very important and would be very useful to the users of our search engine, so we would like to index as many of its pages as possible. Could you please tell us how many requests per second our crawler can make without being blocked? We would like to make at least 10 RPS, and more if possible.
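If it is easier for you to declare a limit than to tell us a number, our robots also honour the Crawl-delay directive in robots.txt, including fractional values. As a sketch (the exact value is of course yours to choose), the 10 RPS we are asking for would correspond to:

    User-agent: Yandex
    Crawl-delay: 0.1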
All of our bots use distinctive User-Agent strings. You can see the list here:
https://yandex.ru/support/webmaster/robot-workings/check-yandex-robots.html?lang=en
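In case it helps your operations team, here is a minimal Python sketch of the reverse-DNS check that page describes for verifying that a request really comes from one of our robots (the function name is_yandex_bot is just for illustration):

    import socket

    def is_yandex_bot(ip: str) -> bool:
        """Forward-confirmed reverse DNS check, as described on the page above."""
        try:
            host, _, _ = socket.gethostbyaddr(ip)  # reverse DNS lookup
        except socket.herror:
            return False
        # A genuine Yandex robot resolves to a yandex.ru, yandex.net or yandex.com hostname...
        if not host.endswith(('.yandex.ru', '.yandex.net', '.yandex.com')):
            return False
        # ...and the forward lookup of that hostname must return the same IP.
        try:
            return ip in socket.gethostbyname_ex(host)[2]
        except socket.gaierror:
            return False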
If you tell us the conditions under which a crawler can index the sites on your servers without causing an overload, we will make the adjustments needed. Downloading the Wikidata dumps might not help in this situation, as we need to crawl the pages as a user sees them.