
With 1 AI Feature No One Asked For, Grammarly Burned Its Brand

12.03.2026

Using recognizable names to give your AI feature credibility without asking permission is a bad idea.

EXPERT OPINION BY JASON ATEN, TECH COLUMNIST @JASONATEN

CANADA – 2025/07/02: In this photo illustration, the Grammarly logo is seen displayed on a smartphone screen. (Photo Illustration by Thomas Fuller/SOPA Images/LightRocket via Getty Images)

Until this week, I’d never met anyone who didn’t like Grammarly. It’s one of the most useful pieces of software I’ve ever used. It’s also not the kind of software you would ever have a reason to be mad at. Sure, it charges a subscription fee, and no one likes subscriptions, but other than that, it just sits there and helps millions of people write better.

Because it was so helpful and easy to use, Grammarly earned the trust of writers, brands, corporations, and students. That trust was built over more than a decade, one helpful suggestion at a time.

Then, with one feature, the company burned down much of that trust—and got almost nothing in return.

The feature nobody asked for

In August 2025, Grammarly—which has since renamed itself Superhuman—launched something called Expert Review. The idea was that you could open your document, pick an expert from a list, and receive suggestions inspired by that person’s editorial voice.

On paper, it sounds great. Writers spend a lot of time thinking about how their editors might react to a draft. Grammarly was giving everyone the opportunity to know what a seasoned journalist or editor might say.

The problem was that none of those experts had actually contributed to the feedback, or even consented to being included in the first place. Worse, many of the suggestions weren’t representative of those individuals at all.

Grammarly simply scraped their publicly available writing, trained AI models on it, attached their names to the output, and presented the results to users as though these real people were involved. It looked a lot like a real person reviewing your work because that’s what Grammarly wanted you to think was happening.


© Inc.com