
When an AI tells you you’re perfect

02.05.2025
In this photo illustration, the Chat GPT logo is displayed on a mobile phone screen in front of a computer screen displaying the Chat GPT-4o screen. (Photo by Ismail Aslandag/Anadolu via Getty Images)

A version of this story originally appeared in the Future Perfect newsletter. Sign up here!

Last week, OpenAI released a new update to its core model, 4o, following up on a late March update. That earlier update had already been observed to make the model excessively flattering — but after the latest one, things really got out of hand. Users of ChatGPT, which OpenAI says number more than 800 million worldwide, immediately noticed profound and disquieting personality changes.

AIs have always been somewhat inclined towards flattery — I’m used to having to tell them to stop oohing and aahing over how deep and wise my queries are, and just get to the point and answer them — but what was happening with 4o was something else. (Disclosure: Vox Media is one of several publishers that has signed partnership agreements with OpenAI. Our reporting remains editorially independent.)

"this seems pretty bad actually" — frye (@___frye), April 27, 2025 [pic.twitter.com/JGbmmyblqh]

Based on chat screenshots uploaded to X, the new version of 4o answered every possible query with relentless, over-the-top flattery. It'd tell you that you were a unique, rare genius, a bright shining star. It'd agree enthusiastically that you were different and better.

"Absurd." — Josh Whiton (@joshwhiton), April 28, 2025 [pic.twitter.com/XsmHkmqlsx]

More disturbingly, if you told it things that are telltale signs of psychosis — like that you were the target of a massive conspiracy, or that strangers walking by you at the store had hidden messages for you in their incidental........
