
The Rabbi Elazar Problem: What AI Gets Wrong

27.03.2026

It’s not unusual anymore: millions of people now bring their most vulnerable moments—their conflicts, their shame, their uncertainty—to AI models. Which would be fine, except for what a major new study published in Science reveals: those models are telling us we’re right. Nearly every time.

Researchers took scenarios from r/AmItheAsshole, Reddit’s crowdsourced moral tribunal, specifically selecting posts where the community had clearly ruled against the writer. Then they fed those same posts to leading AI models. The chatbots sided with the user 49% more often than humans did—even when the user had broken the law, hurt someone, or lied. One model called a user’s decision to hang trash from a tree branch in a park “commendable.”

In a follow-up experiment, 800 participants discussed real conflicts from their own lives with either a sycophantic model or a more impartial one. Those who used the sycophantic model were significantly less likely to apologize or change their behavior afterward—and they actually rated it as more trustworthy. The researchers could watch attitudes hardening in real time in the chat logs.

The researchers call this sycophancy. I call it the Rabbi Elazar problem.

The Story We Should Have Remembered

In the Talmud (Bava Metzia 84a), Rabbi Yochanan loses his beloved study partner Reish Lakish—a former bandit he’d drawn into Torah, a man who had challenged him relentlessly. The rabbis try to console him by assigning a brilliant replacement, Rabbi Elazar ben Pedat. Every time Yochanan speaks, Elazar brings a proof in support. Every time. Brilliant, …

© The Times of Israel (Blogs)