Artificial intelligence assistants are frequently misreporting or distorting news, according to a joint study by the European Broadcasting Union (EBU) and the BBC. The research analyzed 3,000 responses from major AI platforms—ChatGPT, Copilot, Gemini, and Perplexity—and found that 45% contained major inaccuracies, while 81% had some form of factual or sourcing issue.

The study identified serious attribution problems in about one-third of all answers, including incorrect or missing citations. Gemini performed worst, with 72% of its responses showing sourcing errors, compared with under 25% for the other systems. Outdated information also plagued many responses: Gemini misreported legislation, and ChatGPT incorrectly reported that Pope Francis was still alive after his death.

Conducted across 18 countries and in 14 languages, the study underscores growing concern about AI assistants displacing traditional search engines as a source of news. The Reuters Institute reports that 7% of all online news users—and 15% of those under 25—now rely on AI tools for news updates.

EBU Media Director Jean Philip De Tender warned the trend could erode trust in public information: “When people don’t know what to trust, they end up trusting nothing at all.” The report urges AI developers to improve their models’ ability to distinguish fact from opinion and ensure transparent sourcing, warning that the issue could undermine democratic participation if left unaddressed.