https://lists.wikimedia.org/robots.txt
# robots.txt for lists.wikimedia.org
#
# Disabled crawling for several lists 2005-11-26 to
# discourage people from complaining about items they
# post on public mailing lists being the first Google
# search result about them.
#
# Note that list archives remain public.
#
User-agent: *
Disallow: /pipermail/
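To make the effect concrete: the single `Disallow: /pipermail/` rule tells every compliant crawler to skip the entire Mailman archive tree while the rest of the site stays crawlable. A minimal sketch with Python's standard `urllib.robotparser` (the inlined robots.txt is abbreviated to the operative directives):

```python
from urllib.robotparser import RobotFileParser

# The operative directives from lists.wikimedia.org/robots.txt.
robots_txt = """\
# robots.txt for lists.wikimedia.org
User-agent: *
Disallow: /pipermail/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Every archived message lives under /pipermail/, so compliant
# search-engine bots never index a single list post...
print(rp.can_fetch("*", "https://lists.wikimedia.org/pipermail/wikimedia-l/"))
# ...while the rest of the domain remains fair game.
print(rp.can_fetch("*", "https://lists.wikimedia.org/"))
```

So the archives are public to humans who already know the URL, but invisible to anyone searching for them.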
This is very silly. Ten years later, we should come up with a sensible process/policy rather than surrender our goals to complaint-trolls.
As someone who contributed about 2900 messages to lists.wikimedia.org (about 30% of my mailing list history), I realise that this robots.txt policy may be a gentle way to tell so-called power posters that they're wasting their time and that their graphomania will leave no trace in history. However, as a free knowledge advocate I'm not comfortable with hundreds of thousands of knowledge base items locked into a domain which actively discourages discoverability.
While our mailing lists become increasingly useless, people move their contributions to places where they feel more visible and useful in the long term, like scattered blogs, Q&A websites or even Facebook. It would be easy to make our mailing lists a better publishing platform than Facebook and most random blogs, if only they were not isolated from the World Wide Web. But perhaps we actually *want* them to be isolated?
(Notified the owners of all 175 known public mailing lists. But forgot the link, *facepalm*.)