
Publishers are finally getting serious about AI scraping

March 12, 2026


After years of fragmented pushback, publishers are beginning to organize around a simple goal—making AI companies pay for access.

[Images: Sohel/Adobe Stock; Schedio/Adobe Stock]

Pete Pachal is a journalist and the creator of Media Copilot, a newsletter and podcast that examines how AI is changing media, journalism, and the news.

I think the strongest indicator of how normal using AI has become is the language we use as shorthand for it. It’s now extremely common for someone to say they asked “chat” for some piece of information. We all know what they mean.

But if you needed data on how popular AI portals are now, OpenAI provided it recently when the company revealed that ChatGPT has 900 million users, up from 800 million in the fall. Even if Gemini, Copilot, and Claude weren’t also rising (they are), that would be enough for the media—not to mention brands and marketing/PR agencies—to really understand how fast AI is growing as a discovery channel. Whether or not it’s a source of traffic doesn’t matter; it’s a meaningful layer between publishers and audiences.

That’s obviously the reason there’s been so much interest in the infant field of GEO (generative engine optimization) lately, and why I’ve written about it more than once in the past few months. But the focus on how to get AI search engines to notice and reference content doesn’t mean there shouldn’t be some kind of reckoning with how the content got there in the first place, and what—if any—value exchange that should trigger.


Surveys, such as one conducted by OnMessage last fall, consistently show the public believes content providers should be compensated when their content is scraped by AI engines. The AI industry tends to have a different view, often suggesting that “publicly available” data (i.e., stuff on the internet) is fair game. It’s more nuanced than that, of course, but the central issue is one of leverage: The AI companies have it, and publishers by and large don’t.

The push for a better bargain

A new industry coalition is looking to rebalance those scales. In late February, a group of U.K. media companies—including the BBC, the Financial Times, and The Guardian—announced they were forming SPUR, which stands for Standards for Publisher Usage Rights. In an open letter, the leaders of those companies articulated the group’s purpose: “to establish shared technical standards and responsible licensing frameworks that ensure AI developers can access high quality, reliable journalism in legitimate, responsible and convenient ways.”

In other words, SPUR is meant to help lead the publishing industry toward a better bargain between AI companies and the media. Currently, publishers have a hodgepodge of solutions: You could pursue a licensing deal with one of the big AI companies, an option available only to publishers above a certain size. You could sue the AI companies, an expensive proposition. Or you could try to defend your content through a combination of paywalls, bot-blocking protocols, and nascent technologies aimed at getting AI crawlers to pay for access.
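To make the last of those options concrete, here is a minimal sketch of the bot-blocking layer: robots.txt directives for a few widely documented AI crawler user agents. The tokens below (GPTBot, ClaudeBot, CCBot, Google-Extended) are published by their respective operators, but which crawlers a publisher chooses to block is a judgment call, and nothing in the protocol forces compliance:

User-agent: GPTBot            # OpenAI's training crawler
Disallow: /

User-agent: ClaudeBot         # Anthropic's crawler
Disallow: /

User-agent: CCBot             # Common Crawl, a frequent source of training datasets
Disallow: /

User-agent: Google-Extended   # opts content out of Gemini training, not Google Search
Disallow: /

That voluntariness is the point: robots.txt is a request, not an enforcement mechanism, which is why it leaves publishers with so little leverage on its own.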

© Fast Company