The AI stories we tell – and the ones we don't
Only six per cent of people use AI weekly for news, according to Reuters Institute research — but that figure has doubled in just a year.
As generative AI tools become more embedded in daily life, the narratives journalists create about artificial intelligence — and the blind spots they leave — are profoundly shaping public understanding.
At the Reuters Institute’s AI and the Future of News 2026 event today (17 March 2026), three leading experts explored how the media is framing the AI discourse, what’s missing from the conversation, and how journalism can rise to the challenge.
The top line? Despite surging public awareness and use of generative AI, most people remain wary of its role in news, and critical gaps persist in how the industry explains, investigates, and holds AI power to account.
The cost of imprecision: why vague reporting on AI persists
Joanna S. Kao, senior editor for information and artificial intelligence at the Pulitzer Center, highlighted a persistent weakness in AI coverage: the tendency to use "AI" as a catch-all term, often without clarifying what specific technology is being discussed or how it actually works.
The lack of detail, she argued, makes it difficult for audiences to understand whether the right kind of AI is being applied to a given problem, and who ultimately benefits from its deployment.
The imprecision is not accidental, either. The mystique and jargon surrounding AI often serve the interests of powerful tech companies, who benefit when journalists and the public are kept at arm's length from the details.
She cited the Pulitzer Center’s AI Spotlight Series, which brings together journalists and independent researchers to investigate issues such as gig worker compensation, environmental impacts of data centres, and the evolution of model bias over time.
"Having a community of people to talk with, sharing methodologies, working collaboratively with different disciplines — these are crucial," she said. She stressed the importance of following up on company claims, such as water usage projections for data centres, and of speaking directly to the communities affected by AI rollouts, rather than relying solely on company spokespeople or press releases.
The warning is stark: if journalists fail to demystify AI and scrutinise its real-world impacts, they risk ceding the narrative to those with the most to gain from public confusion.
Lessons from climate: the need for evidence and transparency
Akshat Rathi, senior climate reporter at Bloomberg, drew a direct parallel between the challenges of reporting on AI and climate: both fields are marked by rapid technological change, global stakes, and a lack of robust international oversight.
"We are missing a framework to handle a global technology that is coming to the fore,” he said, noting that while climate science took decades to build consensus and governance structures, AI is advancing at a much faster pace without similar safeguards.
Rathi was particularly critical of the hype surrounding AI as a climate solution. He referenced peer-reviewed studies that produced sharply different estimates of AI's carbon footprint: one suggested that AI companies could generate 80 million tons of CO2 emissions in a single year (comparable to Belgium's annual output), while another put the figure at just 44 million tons over the next six years in the US alone (albeit where the biggest data centres are concentrated). Either way, the companies are well adrift of their net-zero 2030 goals.
"These are the very companies where the targets that they have set, they're not meeting, and if anything, they're going in the opposite direction," he said. "And so they are trying to control the narrative by not disclosing any more information."
He called for newsrooms to treat AI as a cross-beat issue, much as climate desks now routinely collaborate with business, technology, and policy reporters. At Bloomberg, for example, the climate team acts as an internal consultancy, helping other desks interrogate claims about emissions, supply chains, and the true environmental costs of new technologies.
"It’s a real accountability opportunity for newsrooms to get these tech companies to actually disclose the level of information that would give us an insight into the real harm from not just a carbon perspective, but also water," he added. "Both of which we have really poor understanding on."
The implication: unless journalists demand evidence and resist the temptation to echo industry optimism, they risk repeating the mistakes of early climate coverage, where vested interests shaped the narrative for years to come.
Following the money: the challenge of holding big tech to account
Niamh McIntyre, senior reporter on The Bureau of Investigative Journalism's big tech team, identified structural barriers that make it difficult for journalists to hold private AI companies accountable.
She noted that many of the most influential firms are not publicly traded and therefore face few disclosure requirements: "A lot of the companies making this technology are private, so they don’t even have to do the basic reporting that public companies have to do," she explained. The result is a reporting environment where NDAs, legal threats, and a culture of secrecy make it hard for journalists to develop relationships or verify claims. Source development is therefore crucial.
McIntyre described the painstaking process of building trust with lower-paid workers — data labellers, moderators, and gig workers — who are often the only people willing to speak candidly about how AI systems are built and deployed. That source development, she said, is where many of her breakthroughs have come from.
She cited TBIJ’s investigation into Appen, a company that supplies linguistic data and labour to the US military, as an example of how these hidden stories can reveal the true social and ethical costs of AI. She also referenced the recent Anthropic-Pentagon controversy, where the major AI company scored positive PR for setting red lines against fully autonomous weapons and mass surveillance, while quietly allowing its models to be used in overseas US military operations.
These types of accountability investigations are long, expensive and never guaranteed to succeed. Underfunded newsrooms, meanwhile, can be tempted to rely instead on press releases and industry narratives.
As a former Pulitzer Center fellow, she called for greater philanthropic support and international collaboration, especially as AI technologies increasingly cross borders: "Some of the most fruitful work I’ve done has been as a result of those international collaborations."
This article was drafted with the help of an AI assistant and lots of human prompting, before it was edited by another human.