https://auth.wikimedia.org/robots.txt currently says "Invalid request URI (requestUri=/robots.txt), can't determine language."
We may want a robots.txt that disallows all bots from crawling any pages on this site, since login links on all wikis will soon redirect to the new auth domain.
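A minimal robots.txt that blocks all crawlers from every path could look like this (standard robots.txt syntax; the exact file to be served is up to the implementer):

```
User-agent: *
Disallow: /
```

The `*` user-agent matches all crawlers, and `Disallow: /` excludes the entire site. Well-behaved bots fetch `/robots.txt` before crawling and honor these rules, though compliance is voluntary.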