Why Simple Questions Beat Smart AI
A Rabbi, A Lobster, and a Passphrase
All the AI-focused YouTube channels I follow had been talking about OpenClaw for weeks — the autonomous, open-source agent platform people were suddenly using to automate entire workflows. As someone who teaches AI to clergy communities and wrote a book called AI for Clergy: Harnessing the Power of the Digital Golem, I figured I should try it myself.
So I spent a day setting it up on a cloud server with my AI collaborator, Claude. We got through the hard parts — spinning up the machine, connecting a Telegram bot, configuring email access. By the end of the day, Haru (my OpenClaw agent, named after an AI character in my music project) was reading email, summarizing websites, and responding on Telegram.
I wanted Haru to send emails autonomously. Every time we tried, it failed. The problem was technical: the tool we were using protected its credentials with a passphrase. When I ran the command manually, it worked fine. But when the agent tried to run it on its own, it hit the passphrase prompt and just hung there.
For hours, Claude tried one approach after another. Each one worked halfway. Each one broke something else. We spiraled — more complexity, more layers, more infrastructure — and got nowhere.
I watched this for a while. Then I asked a question.
“Can’t we just tell it the passphrase?”
Claude paused — metaphorically, at least — and said: yes. Try piping it in directly.
It worked immediately.
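The story never names the tool, so this is only an illustrative sketch of the pattern: instead of letting a command stop at an interactive passphrase prompt, you pipe the passphrase into its standard input. Here `cat` is a stand-in for the real passphrase-protected tool, and the passphrase value is made up.

```shell
# Illustrative only: `cat` stands in for a tool that reads its
# passphrase from standard input instead of an interactive prompt.
PASSPHRASE='correct horse battery staple'   # made-up example value

# Piping the secret in means there is no prompt to hang on.
printf '%s' "$PASSPHRASE" | cat
```

In practice, real tools usually need a flag telling them to read the secret from stdin rather than a terminal; check the tool's own documentation for the exact option.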
The map is not the territory
Alfred Korzybski’s observation — that our mental models of reality are not reality — cuts right to the heart of what happened in that moment.
Claude had an extraordinarily detailed map. Every layer of the system was accounted for, accurately understood, and systematically addressed. And that’s exactly what made it limiting. When your map gets detailed enough, you start navigating the map instead of the ground. You stop looking at what’s actually in front of you.
I had almost no map. Which meant I was looking at the territory directly. And the territory said: there’s a door, it has a lock, and we know the combination. Why aren’t we just using it?
This is a failure mode of working with AI. The AI’s map is so much more detailed than ours that we assume it must be closer to reality. Sometimes it is. Sometimes it isn’t. And sometimes it quietly pulls you away from the obvious.
The expert sees a complicated problem and reaches for complicated tools. The beginner sees a lock and asks for the key.
The beginner’s question
In Jewish learning, we have a practice called chavruta — studying in pairs. One of the consistent patterns is that the less-experienced partner often asks the question the expert has stopped asking. They haven’t yet learned what’s “supposed to be hard.” They haven’t absorbed the assumption that the simple solution can’t be the right one.
My question — “can’t we just tell it the passphrase?” — was exactly that kind of question. It felt almost embarrassingly simple.
There’s a deeper resonance here. The Talmud describes Torah as coming from “one shepherd,” but praises the student who can hold multiple, even contradictory, perspectives at once — someone with what the rabbis call “wide ears.”
Working with AI requires something similar. You have to be able to hold the AI’s technical depth and your own common sense in the same moment, and know when to trust each one.
No map, however detailed, exhausts the territory. And the person who knows their map is incomplete is often the one who finds the path.
When the AI goes complex
AI almost always makes things more complicated before it makes them useful. That’s how it’s trained to reason: expand, explore, cover every edge case.
Which means the human in the loop has a job.
When you feel the conversation spiraling — more abstraction, more layers, more “solutions” that require three new solutions — stop.
Ask a simpler question. Ask a dumber question. Ask the question that feels too obvious to be worth asking.
What does the situation actually look like right now, without all the scaffolding?
Sometimes the answer is sitting there, waiting for someone to notice it.
You are not the AI’s assistant
A lot of people defer to AI without realizing they’re doing it. It sounds confident. It’s fluent. It knows more technical detail than we do. So we follow its lead.
The AI is a powerful thinking partner. It is not the authority in the room.
During those hours of debugging, I didn’t just watch. I noticed when we were spinning. I trusted that something was off. I interrupted the process with a question that didn’t come from the map, but from paying attention to what was actually happening.
That posture — not deference, not dismissal, but active participation — is the real skill.
The AI can generate the map. You still have to walk the ground.
I am not a programmer. I cannot write a line of code. I spent a day building a system that reads email, sends messages, summarizes websites, and runs continuously in the background.
And the moment that made it work was a question a child might ask.
The tools have changed dramatically. The habits that matter haven’t.
Stay curious. Stay a little skeptical. Don’t let expertise harden into assumption.
The AI can generate the map.
But someone still has to notice the door.
