//As a Wikisource user, I want the team to see if we can prevent web crawlers from downloading books, so the ebook exports can only be done by real users (and, therefore, the queue will be smaller & more efficient).//
**Background:** This is a follow-up to T256018. As discussed, if we keep the download links in the sidebar, web crawlers will continue to use them. However, if we instead add a download button at the top right of the book, crawlers will not be able to trigger it. This raises a question: can we replace all current download links with the new system, so that we prevent automated downloads and thereby improve reliability?
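To illustrate the idea, here is a minimal sketch of a button-driven download. The function and parameter names (`buildExportUrl`, the `lang`/`format`/`page` query parameters, and the `ws-export.wmcloud.org` host) are assumptions for illustration, not the actual implementation: the point is that the export URL is only constructed inside a click handler, so it never appears as a plain `<a href>` in the static HTML that link-following crawlers traverse.

```javascript
// Hypothetical sketch only — names and URL structure are assumptions.
// Build an export URL at click time instead of embedding it in the page.
function buildExportUrl(title, format) {
  const params = new URLSearchParams({
    lang: 'en',     // assumed Wikisource language code parameter
    format: format, // e.g. 'epub', 'pdf'
    page: title,    // the book's page title
  });
  return 'https://ws-export.wmcloud.org/?' + params.toString();
}

// Attach the download action to a button; crawlers that only follow
// href attributes never see the generated URL.
function attachDownloadButton(button, title, format) {
  button.addEventListener('click', function () {
    window.location.href = buildExportUrl(title, format);
  });
}
```

Note that this only deters crawlers that harvest static links; a crawler executing JavaScript could still trigger the export, which is one of the risks the investigation should weigh.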
**Acceptance Criteria:**
* Investigate whether and how we can prevent automated downloads via bots & web crawlers
* Investigate the main challenges, risks, and dependencies associated with such work
* Provide, if possible, a general estimate of the potential impact on ebook export reliability
* Provide a rough sense of the level of effort required for such work
* Share findings with the team