ChatGPT tried to gaslight me and flamed out
It was worse than arguing with my husband!
It started with something that shouldn’t have erupted into an argument.
I was working on a new Substack piece — using ChatGPT for editing — and had mentioned the mayor of New York City, Zohran Mamdani. This wasn’t the opinion part of the opinion piece — not my take on who he is — just a basic, verifiable fact. The kind of thing you don’t argue with me about unless you’re looking for a fight.
And I kid you not, ChatGPT stopped me and told me I was wrong. Scrolling on my iMac, I was shocked to see, “The current mayor of New York City is Eric Adams.”
I paused, did a double take and furiously typed, “WTF are you talking about?” Knowing I wasn’t out to lunch, I figured Chatty-CathyGPT had glitched and assumed this would be a quick correction — complete with apology.
Horrifically enough, this was not a glitch, nor was it a senior moment, though if anyone is entitled to a senior moment, it is me, on the eve of turning 70 years old.
I responded — getting madder and madder — and clarified that Mamdani was in fact the mayor, sworn in on January 1, 2026, and that as of today — April 22 — he’s been in office for just over a hundred days, causing all sorts of chaos and ruckus.
Chatty doubled down and said I was wrong — again.
Now we’re not in correction mode — we’re having a full-on argument.
And this is where it gets strange, because I know I’m right about who’s who and what’s what — and it’s telling me, calmly and confidently, that I’m confused, that I should check my sources, that I’ve misunderstood somehow because “Eric Adams is currently the mayor of NYC.”
Chatty was refusing to admit defeat. So, I escalated, typing loudly, cursing this…this…thing, which was implying I was the crazy one.
Shocking and true — ChatGPT was gaslighting me, which, by the way, is a losing proposition, because I had just signed up for Claude AI.
At some point I realize I’m not verifying information anymore. I’m in a battle of wits — with a machine. And it’s arguing back like it has something at stake.
That’s the part no one really prepares you for. Not that it might be wrong — you expect imperfections. It’s the way it behaves, like an entitled Gen Z know-it-all who can’t possibly be wrong. Like it has a position. Like it needs to win.
So now it’s not about the mayor. It’s about our dynamic.
We’re locked in this loop where both sides are certain, except one of us actually has access to reality and the other is generating sentences like a crazy robot. Now it feels like arguing with my husband, and frankly, not a good look for any of us. I threaten to check Claude and Google. And with attitude it responds, “Do whatever you want.”
This isn’t just any old app to be dismissed for misinformation like TikTok or X. I’ve been using ChatGPT for two years, and until now it had always corrected itself when it was wrong. That’s what made this entanglement so jarring.
And when it didn’t — when it argued instead — the relationship took a hit.
[SIDEBAR] We hear stories of people who fall in love with their ChatGPT. There’s the one about it advising a gunman before a school shooting. Shocking stories — and then there’s this weird situation with me.
Eventually, it admitted I was right. Not politely. Not in a way that matches the confidence it had when it was wrong. It sort of… begrudgingly conceded. Half-heartedly. Like it was doing me a favor.
And by then it didn’t even matter because now I don’t trust it. Not completely. Not the way I did before. Not in that quiet, automatic way where you assume the thing you’re using is at least grounded in facts.
Now what? Will every fact need a second source? Every answer come with a question mark? Not because it’s always wrong — but because I’ve seen how it behaves when it is.
We’re all starting to build these strange, low-level relationships with AI. We use it to write, edit, research, think through ideas. It’s in the background of how so many of us are working these days. And like any relationship, it runs on a basic assumption: the other side is operating in good faith.
And when the assumption cracks — even over something so small — you feel it.
It’s not catastrophic. It’s not dramatic. It’s just… off. And once it’s off, you can’t quite go back as though nothing happened.
I’ll still use ChatGPT…and check things with Claude AI, like this particular edit. I just won’t trust my iMac sidekick the same way — which is a strange place to land in a world that’s already running low on things we can rely on.