
Wikipedia editors are raising concerns after discovering factual errors and fabricated citations in articles translated with AI tools.
The issue surfaced in translations funded by the Open Knowledge Association, a nonprofit that pays contributors to translate articles into other languages. Translators have reportedly been relying on large language models such as ChatGPT and Google Gemini to speed up the process.
According to reporting from 404 Media, editors reviewing the translated articles found multiple cases of so-called AI “hallucinations.” These included incorrect information that wasn’t present in the original article, swapped or missing citations, and references pointing to unrelated sources.
AI mistakes create editorial challenges
Large language models are known to sometimes generate plausible-sounding but inaccurate information. When used for translation without careful human review, those mistakes can slip into published content.
Earlier versions of the translation workflow reportedly used Grok, the AI model associated with Elon Musk's X platform. After the errors came to light, the Open Knowledge Association changed its policies on AI use.
Paid translations under scrutiny
Wikipedia contributors are generally unpaid volunteers, but the Open Knowledge Association offers stipends to translators working on multilingual articles. The program reportedly pays around $400 per month for full-time translation work, with many participants located in developing regions.
Editors worry that the financial incentive may encourage speed over accuracy, particularly when translators depend heavily on automated tools.
Stricter oversight for AI-assisted edits
Despite the issues, Wikipedia editors have not abandoned the program entirely. Translations remain crucial for expanding content beyond the English edition, which is still by far the largest version of Wikipedia.
However, translators working with the program now face stricter oversight. Contributors who accumulate five documented errors may be banned, and their previous translations can be removed unless a senior editor takes responsibility for reviewing them.
The controversy highlights a growing challenge for collaborative knowledge platforms: balancing the efficiency of AI tools with the need for accuracy and editorial reliability.