
I completely missed what ChatGPT was doing to me—until an 11-minute phone call made it painfully obvious

22.02.2026

I’ve been using ChatGPT and other AI tools recently for quite a few things. A few examples:

- Working on strategy and operations for my latest business venture, Life Story Magic.
- Planning how to get the most value out of the Epic ski pass I bought for the year, while balancing everything else.
- Putting together a stretching and DIY physical therapy plan to get my shoulders feeling better during gym workouts.

Along the way, I’ve done what I think a lot of AI power users eventually wind up doing: I’ve gone into the personalization and settings and told the chatbot to be neutral, direct, and just-the-facts.

I don’t want a chatbot that tells me “That is a brilliant idea!” every time I explore a tweak to my business strategy. They’re not all brilliant, I assure you.

And I don’t want a lecture about how if I truly have shoulder issues I should see a “real” physical therapist. I’m an adult. I’m not outsourcing my judgment to a robot.

“Stop. I didn’t ask you that”

The result of all this is that I’ve developed an alpha relationship with AI.

I tell it what to do. If it goes on too long, assumes I agree with its suggestions, or starts padding its answers with unnecessary niceties, I shut it down.

“Stop. I didn’t ask you that.” “No. Wrong. Listen to what I’m saying before replying.” “All I need from you are the following three things. Nothing else.”

As ChatGPT itself repeatedly reminds me, it has no feelings. Here—I even asked it to confirm while writing this article:



© Fast Company