We tested out AFP's AI slop detection tips on our own AI-generated event write-up

11.03.2026

We attended a webinar on detecting AI slop and best practices in generative AI, organised by AI transcription company Trint, used an AI to draft an article with some human prompting, and then tested whether another AI could identify its origins. The first half of this piece is mostly AI-generated from the event; the second half is a human analysis of how the AI performed.

As of May 2025, more than half of all new online articles were generated by artificial intelligence, up from just a tenth at the end of 2022, according to SEO firm Graphite, as reported by Dataconomy.

This rapid rise of AI-generated content – these days dubbed "AI slop" – is reshaping the information landscape and presenting challenges for journalists and newsrooms worldwide.

AFP (Agence France-Presse), the French news agency that reports in six languages, has had to meet these challenges head-on, namely by creating an AI literacy programme led by Sophie Nicholson.

She detailed how it all works in a recent webinar organised by AI transcription tool Trint and explained what it means for the future of trustworthy journalism.

A new era of misinformation and verification

Nicholson has spent a decade working on mis- and disinformation. While deepfakes and other sophisticated fakes are a concern, she described most AI slop as an extension of the same problem: it is mostly low-quality and misleading.

Most misinformation remains simple to recognise, and it is usually mislabelled or financially motivated. The difference now is the sheer scale and speed with which generative AI amplifies it, which means newsrooms need to be more vigilant than ever.

Take the Ottawa food bank gaffe from 2023 as a prime example: Microsoft published an AI-generated travel article that bizarrely listed the food bank as a must-see tourist destination, even suggesting visitors go "on an empty stomach."

This wasn’t a sophisticated deepfake, but a context-blind, careless error. Microsoft's admission that "the article was not published by an unsupervised AI" shows that poor editorial oversight is usually to blame.

AI summaries on Grok and Google also tend to throw up inaccuracies or outright fabrications, so journalists and newsrooms must keep their wits about them.

AFP’s approach: Training, tools, and human oversight

To meet these challenges, AFP has invested heavily in AI literacy and verification training across its global newsroom. Nicholson explained that AFP has developed a structured programme, appointing 22 "AI ambassadors" in different countries to deliver training, share expertise, and update guidelines as technology evolves.

Key elements of AFP’s strategy include:

Robust verification workflows: Journalists are trained to use a combination of digital tools (such as InVID, WeVerify and Google's SynthID) and traditional reporting methods. However, Nicholson cautioned that these tools are not 100 per cent reliable, and human judgment and on-the-ground checks remain critical.

Human-in-the-loop processes: While AFP uses AI for efficiency in areas like translation and transcription, all content is subject to human review before publication. AFP does not publish AI-generated articles or images, and clear guidelines (see chapter 8) are in place to prevent over-reliance on automation.

Transparency and accountability: Mistakes happen, and it's better to be open about them than to sweep them under the rug.

"Some of the most positive comments we ever got were on corrections that we did," she says, ensuring that AFP hasn't been caught out on any major errors. "And if you trip up, it happens, but you're [being] authentic."

AFP's do's and don'ts on AI

Do verify everything it spits out: facts, names, dates, stats – you name it.

Do use it for deep research: it can find things you may have missed, but make sure to double-check what it returns.

Do "experiment in low-risk areas": AI is good at retrieval, for example, fetching quotes out of a video file. Nicholson mentioned an internal AFP project where NotebookLM was used to interrogate large volumes of speeches from Colombian election candidates.

Don't anthropomorphise AI: the tech isn't a source or a colleague, so don't presume it understands context, nuance and ethics. You don't have to remember your pleases and thank-yous, either.

Don't let it handle your first draft: let it check your first draft instead.

Don't give it sensitive and confidential data: there's no guarantee of how it will be stored and used.

Putting the tips to the test

The irony isn't lost on us here. JournalismUK does use a custom AI assistant to help with some of its news coverage, and we used it for this article. But like any AI-generated article, it was edited heavily by a human editor before going out. This section, and everything through to the end, however, is written by a human. The above is a mix, as I'll explain below.

I wanted to test out some of the advice, so I broke Nicholson's rule and allowed our AI to write the first draft, providing it with a transcript of the event. The webinar was free to attend and freely distributed, so the material was not sensitive.

I asked our AI to find a source confirming the "50 per cent AI content" claim above (which checked out) and then asked it to draft the article with this as the lead hook.



© journalism.co.uk