A profound transformation is underway in how people access, interpret and trust the news, as artificial intelligence reshapes the everyday information habits of millions. Yet across multiple international studies, one pattern consistently emerges: as exposure to AI-generated information increases, public confidence in what people see continues to decline.
A landmark survey across six countries by the Reuters Institute for the Study of Journalism reveals that weekly use of generative AI systems has nearly doubled in the past year. People now use these tools primarily for information-seeking tasks – researching topics, asking questions and, increasingly, consuming news.
But despite the rapid uptake, trust has not kept pace. Even the most widely used AI tools – ChatGPT, Gemini and Copilot – lag far behind news organisations in public confidence.
Many users who encounter AI-generated summaries through search say they do not click through to read the original reporting – a shift that fundamentally alters how journalism reaches audiences.
This shift is already visible: more than half of respondents have seen AI-generated answers in search in the past week – a higher share than those who used any standalone AI system. In other words, many are exposed to AI-generated interpretations of journalism without ever realising it.
Distortion at scale
Concerns about accuracy are not theoretical. The largest international study to date on AI assistants and news, coordinated by the European Broadcasting Union (EBU) and led by the BBC, found that 45 per cent of all AI-generated answers contained at least one significant error.
Across 3,000 outputs tested in 18 countries, journalists found systemic problems: sourcing failures, hallucinated details, and outdated or misleading information. Gemini performed the worst, with significant issues in more than three-quarters of responses.
These errors matter not only because people assume AI summaries are accurate, but because audiences often blame both the AI tool and the news outlet cited – even when the mistake has nothing to do with the publisher. For public service media already battling disinformation and scepticism, this is an existential risk.
As the EBU warns, when people cannot tell what information is reliable, “they end up trusting nothing at all”.
A content ecosystem flooded by AI
Meanwhile, the volume of AI-generated written content online has quietly surpassed human-written material. A major analysis by Graphite, based on a dataset of 65,000 articles, reports that AI-generated content overtook human writing on the open web in late 2024.
Yet most of this content never reaches readers. Graphite’s parallel study shows that AI-generated articles rarely surface in Google Search or ChatGPT results, creating a hidden layer of mass-produced, low-quality material that sits beneath the visible information ecosystem.
But even if AI slop remains unseen, the visible outputs – AI-curated, AI-framed summaries – are reshaping public understanding in more consequential ways.
Trust collapsing faster
Across the Atlantic, a recent Pew Research Centre survey adds another layer of concern: half of Americans believe AI will have a negative impact on the news over the next 20 years.
Nearly six in ten believe it will lead to fewer journalism jobs. Even among those optimistic about AI’s broader societal benefits, there is deep scepticism about its effect on news.
Two-thirds of respondents say they are extremely or very concerned about AI spreading inaccurate information. Strikingly, these anxieties are shared across political groups – a rare point of bipartisan alignment.
Education levels, however, show a divide: people with more formal education are more pessimistic about AI’s impact on journalism and more doubtful of AI’s ability to write news accurately.
New imbalance of power
A clear mismatch emerges from these studies. AI tools are already acting as de facto news editors – summarising articles, choosing sources, shaping emphasis and influencing what millions of users see – yet they operate outside the transparency and accountability obligations placed on news publishers.
The result is a growing regulatory vacuum. The Digital Services Act, AI Act and Media Freedom Act each regulate parts of the digital ecosystem, but none of them clearly addresses the emerging reality that AI tools are now selecting, reshaping and interpreting news on behalf of millions of citizens.
Window of opportunity
Despite all uncertainties, one conclusion should encourage newsrooms: audiences still trust human journalism more than AI. They prefer news produced and edited by people. They believe human-led reporting is more credible, transparent and responsible.
This represents a competitive advantage for newsrooms, but only if they can communicate it clearly and utilize AI responsibly behind the scenes, without letting automation undermine their editorial integrity.
As AI becomes a central gateway to information, the challenge is to ensure that innovation does not come at the expense of trust – and that journalism remains a reliable anchor in an increasingly automated news environment.
(BM)