Sometimes the MT engines turn out complete gibberish, or the editor loses track of the text. Would it be possible to detect this? Perhaps something like a statistical engine could verify that the text is reasonably similar to an existing language model; not to do statistical translation, but statistical verification of the text.
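As a rough illustration of what such statistical verification might look like, here is a minimal sketch using a character-trigram model: score a candidate sentence by its average smoothed trigram log-probability against a reference corpus, and flag low-scoring text as likely gibberish. Everything here (the tiny reference corpus, the smoothing constant, the function names) is illustrative, not an existing tool.

```python
# Sketch: flag possible gibberish by comparing character-trigram
# statistics of a candidate against a reference corpus in the
# target language. All names and constants are hypothetical.
from collections import Counter
import math

def trigram_counts(text):
    text = "  " + text.lower()
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def avg_logprob(candidate, ref_counts, ref_total, vocab=5000):
    # Average per-trigram log-probability with add-one smoothing;
    # lower scores mean the text looks less like the reference language.
    cand = trigram_counts(candidate)
    score, n = 0.0, 0
    for tri, c in cand.items():
        p = (ref_counts[tri] + 1) / (ref_total + vocab)
        score += c * math.log(p)
        n += c
    return score / max(n, 1)

# In practice the reference would be a large target-language corpus.
reference = ("the quick brown fox jumps over the lazy dog "
             "machine translation sometimes produces unreadable output "
             "editors should be able to review and discard failed text")
ref_counts = trigram_counts(reference)
ref_total = sum(ref_counts.values())

ok_score = avg_logprob("the editor reviews the translation",
                       ref_counts, ref_total)
bad_score = avg_logprob("xq zvk qjx wvv zzq kkx", ref_counts, ref_total)
```

A real deployment would need a per-language model trained on a sizeable corpus and a tuned threshold, but even something this crude separates natural-looking text from keyboard-mash; the interesting question is where to set the cutoff so editors are warned without being spammed.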
After checking some translated text, it seems texts with gibberish are left in the article either because the editor gives up or because they did not notice. A lot of the gibberish sits in sections with many messed-up templates, so I guess that is an indication that the editor simply gives up. That could also be an indication that we need better tools to revisit failed translations, i.e. to make it easier to throw out stuff we don't know how to translate.