The robots.txt file at http://static.wikipedia.org/ is incomplete...
...in what way? And what issues does this cause?
The domain http://static.wikipedia.org/ has lots of directories listed on its
main page, but the robots.txt disallows bots from downloading data from only 9 of them.
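To illustrate the effect: robots.txt only blocks paths it explicitly lists, so any directory missing from the file stays crawlable. A minimal sketch with Python's standard `urllib.robotparser` (the directory names here are hypothetical, not the actual entries at static.wikipedia.org):

```python
from urllib import robotparser

# Hypothetical robots.txt content; the real file at
# http://static.wikipedia.org/robots.txt lists different paths.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
Disallow: /tmp/
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A directory listed in robots.txt is blocked for crawlers...
print(rp.can_fetch("TestBot", "/private/file.html"))    # False
# ...but any directory NOT listed remains crawlable,
# which is the problem with an incomplete robots.txt.
print(rp.can_fetch("TestBot", "/downloads/file.html"))  # True
```

So until every directory shown on the main page is covered, well-behaved bots will still fetch the unlisted ones.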
Fixed it, thanks for the report.