
When Sounding Good Replaces Thinking Well

21.12.2025

First, have a large language model write it; then, have another program humanize it. That's a curious trend I'm seeing today, and it concerns me. It's not that it feels scandalous or new; it's that it has become oddly normalized as a way of making computer-generated text seem "more human." These sites are widely available and actively promoted. Here's a quote from Humanize AI that sums up its promoted role:

Transform your AI-generated content into natural, human-like text with the ultimate Humanize AI text tool. This ai-to-human text converter effortlessly converts output from ChatGPT, Bard, Jasper, Grammarly, GPT4, and other AI text generators into text indistinguishable from human writing. Achieve 100% originality and enhance your content creation with the best Humanize AI solution available.

The purpose isn't subtle: it's to help users conceal the fact that a large language model was involved at all. In that sense, it's not a humanizer but a dehumanizer.

At first glance, this might seem like a superficial concern. Writing has always been edited, and content has always been influenced by a wide variety of sources, some human and some not. Verbosity has never been proof of originality. But this moment feels different to me. What’s being optimized here isn’t clarity or insight. It’s plausible authorship, and that marks a critical shift in how we relate to language itself.

I think it's fair to say that language functions as evidence of thought. Not perfect evidence, and certainly not infallible, but evidence nonetheless. In the past, words carried traces of a path that could often include the struggle of…

© Psychology Today