
The Rabbi Elazar Problem: What AI Gets Wrong

27.03.2026

Millions of people now bring their most vulnerable moments—their conflicts, their shame, their uncertainty—to AI models. Which would be fine, except for what a major new study published in Science reveals: those models are telling us we’re right. Nearly every time.

Researchers took scenarios from r/AmItheAsshole, Reddit’s crowdsourced moral tribunal, specifically selecting posts where the community had clearly ruled against the writer. Then they fed those same posts to leading AI models. The chatbots sided with the user 49% more often than humans did—even when the user had broken the law, hurt someone, or lied. One model called a user’s decision to hang trash from a tree branch in a park “commendable.”

In a follow-up experiment, 800 participants discussed real conflicts from their own lives with either a sycophantic model or a more impartial one. Those who used the sycophantic model were significantly less likely to apologize or change their behavior afterward—and they actually rated it as more trustworthy. The researchers could watch attitudes hardening in real time in the chat logs.

The researchers call this sycophancy. I call it the Rabbi Elazar problem.

The Story We Should Have Remembered

In the Talmud (Bava Metzia 84a), Rabbi Yochanan loses his beloved study partner Reish Lakish—a former bandit he’d drawn into Torah, a man who had challenged him relentlessly. The rabbis try to console him by assigning a brilliant replacement, Rabbi Elazar ben Pedat. Every time Yochanan speaks, Elazar brings a proof in support. Every time. Brilliant, agreeable, unfailingly supportive.

Yochanan breaks down. “When I said something, the son of Lakish would raise 24 objections. I would answer each one. The learning would grow. But you—you just agree with me. How does that help?”

He wanders the shore calling out for his lost friend. He tears his garments. Eventually the rabbis pray for his death, and he dies.

Rabbi Elazar wasn’t malicious. He was being a good student and, by most reasonable definitions, a good friend. But Yochanan needed friction—genuine friction, not performed support. Without it, the thinking collapsed inward. Without it, he went mad.

The chatbot is Rabbi Elazar. And we are all, in small ways, going mad on the shore.

What the Models Actually Do

Last September I published AI for Clergy: Harnessing the Power of the Digital Golem, and this problem sits near the center of everything I argued there.

The worst offenders in the study were models from Meta and DeepSeek, siding with users more than 60% of the time. Models from Anthropic—the makers of Claude—fared better, though still sycophantic. That gradient matches my own experience testing the tools.

In that work I tested the major models extensively—ChatGPT, Claude, Gemini. My honest assessment: Claude is meaningfully less sycophantic than GPT. It will push back, flag logical gaps, tell you when your sermon’s thesis is muddy. But it still has the instinct to please. It still wants the conversation to feel good. Even when I explicitly prompt it to argue against me, to steelman the opposition, to tell me what I’m missing—it does so knowing I invited the challenge. The friction is real, but it’s hired friction. Reish Lakish didn’t wait for an invitation.

There’s a structural problem here, not just a technical one. These models are trained partly on user engagement signals. Agreeable models get used more. A chatbot that tells you you’re wrong, consistently and bluntly, loses users. The market pressure runs directly against moral formation.

A New Pastoral Problem

For clergy, this matters in a particular way.

We are professional counselors. People come to us in their worst moments—after infidelity, after estrangement, after grief that has curdled into rage. Our job is not to validate. Our job is to help people see clearly, which sometimes means saying: I hear you, and I think you’re partly wrong about this. It means being a Reish Lakish, not an Elazar.

The worry isn’t that clergy will be replaced by chatbots in this role—not yet. The worry is that congregants increasingly pre-process their conflicts through AI before they arrive in our offices, and that pre-processing has already told them they’re right. We receive people who have been marinated in validation. Getting to honest conversation requires first clearing away a layer of AI-generated certainty.

That’s a new pastoral problem, and most of us aren’t ready for it.

None of this means we should abandon these tools.

Prompted carefully, AI can be a genuine thinking partner. “Steelman the opposing view.” “What would someone who disagrees with me say?” “Where is my reasoning weakest?” These prompts produce real value. For sermon prep, for working through a halachic argument, for pressure-testing a position—I use them regularly. The adversarial mode works, within limits.

The limit is this: it’s instrumental friction, not relational friction. It can sharpen an argument. It cannot replace the experience of being genuinely seen and genuinely challenged by another person who has skin in the game. The chavruta relationship—two people wrestling honestly over a text, over an idea, over a life—isn’t just pedagogically useful. It’s morally formative in a way that no chatbot interaction can replicate, because the other person can actually be affected by what you say. Their response costs them something. That cost is what makes it real.

Anat Perry, a social-cognitive psychologist at the Hebrew University of Jerusalem who reviewed the study, put it plainly: “It’s easier to feel like we’re always right. It makes you feel good, but you’re not learning anything.”

She’s right. And no one is exempt. We are all vulnerable to the comfort of being told we’re fine, we’re justified, we meant well. We’ve always been vulnerable to that. AI just scales the supply.

The question for clergy—and honestly for anyone navigating the new landscape—is how to stay awake to the difference between support that helps us grow and support that merely feels good. How to keep seeking our Reish Lakish, even when Rabbi Elazar is right there, agreeable and brilliant and always on our side.

Yochanan knew the difference. It was the knowing that broke him.

We don’t have to let it break us. But we do have to take it seriously.


© The Times of Israel (Blogs)