How Reach and Immediate are rising to the AI disinformation challenge

24.03.2026

Artificial intelligence is rapidly transforming the information landscape, but as its capabilities grow, so do the risks for journalists and the public.

At a recent webinar organised by the UK independent press regulator IPSO, leading editors and strategists explored how AI is being weaponised for disinformation, how newsrooms are being deceived, and what practical steps organisations are taking to protect trust and integrity.

The new arms race: AI and the scale of disinformation

Michael McManus is the director of research at the Nation-states, a UK-based think tank specialising in foreign affairs. He opened with a stark warning: AI is now a core tool for bad actors.

Nation-states, extremist groups, and ideologically motivated individuals are all polluting the information ecosystem with newfound speed and sophistication.

"The leaps are geometric," McManus notes. AI enables the mass production of convincing fake narratives, hyper-personalised phishing attacks, and deepfakes that blur the line between reality and fabrication.

He pointed to recent research on Russian operations, where AI-driven social engineering scams target high-level officials with tailored, credible-sounding messages.

The challenge, McManus argued, is that as deepfake technology improves, journalists will soon face an "event horizon" – a critical point of no return – where it becomes nearly impossible to distinguish real from fake content.

"Disinformation is when somebody deliberately puts something in the ecosystem which they know to be untrue, but misinformation is when people in good faith share it believing it to be true," he said. The risk is that, without robust verification and industry-wide training, even experienced journalists could be misled.

McManus called for a new standard: "verify, then trust." He cited initiatives like BBC Verify and Finland’s school curriculum on deepfake detection as models for building "herd immunity" among journalists and the public. Ultimately, he argued, AI tools are only as good as their programmers, and human oversight remains essential to guard against both technical and corporate biases.

Deception in the PR sector: AI-generated stories and the erosion of trust

Our colleagues at Press Gazette have done extensive work through their Reality Wars series, investigating a major and growing concern facing the news sector: ostensibly legitimate expert sources that turn out to be AI-generated people and quotes.

There have been high-profile gaffes. Publications like The Telegraph, Wired and Business Insider have been hoodwinked by fake freelancers, and PR tools and agencies are increasingly being used to deceive journalists and editors.

Editor-in-chief Dominic Ponsford described how some PR agencies have "weaponised and industrialised" the process of securing SEO mentions for brands, exploiting journalists' traditional trust in press releases, especially for softer lifestyle stories. The result: thousands of articles in the mainstream media featuring fabricated sources.

Ponsford urged journalists to adopt a sceptical stance: treat unsolicited emails as AI-generated until proven otherwise, verify sources through harder-to-forge channels such as LinkedIn profiles or direct calls, and use AI-detection tools like Pangram and Identify-AI.

"Don't take them as an expert just because they've appeared in lots of other publications, because we found [fake] people that have appeared in maybe 50 different places," he cautions. "They've been just as fake in every other article as the next one that gets written."

Reach’s response: Tools, training, and a culture of vigilance

Gary Rogers, newsroom transformation director at Reach, one of the organisations caught out, acknowledged the scale of the problem and detailed how the publisher is responding.

"We took [the Press Gazette investigation] very seriously and removed affected stories," Rogers said, emphasising that trust between PR and journalism is at risk if made-up stories are published.

Reach has tightened its vetting of PR agencies and is rolling out an advisory research assistant tool to all journalists. This tool flags suspicious emails, checks sender legitimacy, and searches for an internet footprint beyond circular references. "It’s not a yes or no — it’s a 'here are some red flags, you need to go check,'" Rogers explained.
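
The webinar did not go into the tool's internals, but a minimal sketch of an advisory checker in that spirit might look like the following. Every heuristic and field name here is a hypothetical illustration, not Reach's actual implementation; the point is that it returns a list of concerns to investigate rather than a verdict.

```python
# Hypothetical sketch of a "red flags, not verdicts" checker for PR pitches.
# None of these heuristics describe Reach's real tool; they only illustrate
# the idea of surfacing concerns for a journalist to go and check.

import re
from dataclasses import dataclass, field

FREE_MAIL_DOMAINS = {"gmail.com", "outlook.com", "yahoo.com", "proton.me"}


@dataclass
class PitchCheck:
    sender: str                 # e.g. "jane@agency-example.com"
    claimed_outlets: list       # outlets the expert has supposedly appeared in
    independent_sources: list   # mentions found outside press coverage
    red_flags: list = field(default_factory=list)

    def run(self) -> list:
        domain = self.sender.rsplit("@", 1)[-1].lower()
        if domain in FREE_MAIL_DOMAINS:
            self.red_flags.append("sender uses a free mail domain, not an agency one")
        if not re.match(r"^[\w.+-]+@[\w.-]+\.\w+$", self.sender):
            self.red_flags.append("sender address is malformed")
        # "Circular references": many press mentions but no footprint outside
        # articles (employer page, conference talks, professional registers).
        if self.claimed_outlets and not self.independent_sources:
            self.red_flags.append(
                f"appears in {len(self.claimed_outlets)} articles but has no "
                "footprint beyond them"
            )
        return self.red_flags  # advisory: things to check, not a yes/no answer
```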

Training and a culture of verification are central, with Reach adopting a "buy and build" approach to AI tools – developing in-house solutions like Guten for content repurposing and ensuring rigorous governance around privacy and source protection. Every new tool is put through a strict vetting process, with particular attention paid to how data is handled and whether sensitive information could be exposed.

Rogers stressed that while AI can streamline workflows and automate repetitive tasks, it is not a "miracle engine".

"They will cause you problems. They could save you effort. You have to get the balance right,” he cautioned.

Immediate Media: Guardrails, transparency, and internal innovation

Roxanne Fisher, director of digital & AI content strategy at magazine publisher Immediate Media, described a parallel journey rooted in transparency, inclusivity, and responsible experimentation.

Immediate Media has been public about its AI use from the beginning, establishing clear guardrails for what is and isn’t allowed. "We’re very careful to avoid AI for AI’s sake," Fisher said, stressing that all applications must start with user needs, whether those users are creators or readers.

To address concerns about data security and external models, Immediate Media built "First Draft", an internal tool trained solely on the company’s own content archives. This tool supports research and repurposing, providing full citations and references for every output. "We talk about being here for assisted, not generated AI content," Fisher explained, with all AI outputs subject to human oversight.
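
Fisher did not describe how First Draft is built, but the pattern she outlines (draw only on the in-house archive and attach a citation to everything returned) can be sketched generically. The names and the crude keyword scoring below are entirely hypothetical and are not Immediate Media's implementation.

```python
# Generic sketch of "answer only from our own archive, always with a citation".
# This does not describe First Draft itself; it only illustrates the pattern
# of assisted, citation-backed retrieval over an internal content archive.

from collections import Counter
from dataclasses import dataclass


@dataclass
class ArchiveArticle:
    article_id: str
    title: str
    text: str


def overlap_score(query: str, article: ArchiveArticle) -> int:
    """Crude keyword-overlap score between the query and an archived article."""
    query_words = Counter(query.lower().split())
    article_words = Counter(article.text.lower().split())
    return sum(min(count, article_words[word]) for word, count in query_words.items())


def research_with_citations(query: str, archive: list, top_k: int = 3) -> list:
    """Return the best-matching in-house articles, each paired with its citation."""
    ranked = sorted(archive, key=lambda a: overlap_score(query, a), reverse=True)
    results = []
    for article in ranked[:top_k]:
        if overlap_score(query, article) == 0:
            continue  # never return material that cannot be sourced
        results.append({
            "citation": f"{article.article_id}: {article.title}",
            "excerpt": article.text[:200],
        })
    return results
```

Whatever the real system looks like, the design choice the quote points to is the same: nothing reaches a journalist without a reference back to the company's own content, so a human can always check it.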

Training is a major focus, with "immersion days" bringing in external speakers to talk about ethics and confidence-building around AI use.

"We'd much rather be part of the conversation and the problem-solving of how we use [AI] in the best way possible to enhance what we do, not obliterate it," she concluded.

This article was drafted by an AI assistant with a lot of human prompting, before it was edited by a human.

© journalism.co.uk