
Complete This 10-Decision Audit to See Whether AI Is Secretly Running Your Startup

13.04.2026

AI has a tendency to tell leaders what they want to hear: a deep positivity bias.

EXPERT OPINION BY DAVE KERPEN, CEO, KERPEN VENTURES @DAVEKERPEN

Illustration: Inc; Photo: Getty Images

Most startup founders don’t notice when the tool stops advising and starts deciding. Anat Baron didn’t expect to spend part of her day refereeing an argument between three AIs and her developer. But that’s exactly what happened. She had been using ChatGPT, Claude, and Gemini as an informal advisory board alongside her human team while working through a repositioning project. Then all three models came back with the same conclusion: her developer had made a mistake.

The developer pushed back. Baron pulled up the code herself. There it was. The code was right. It was just nested differently than the models expected. Once she showed them the proof, all three reversed themselves.

When she told me this story, the line I couldn’t shake was this: “Three different models, all certain, all wrong. Someone still has to be able to challenge the answer.”

That’s the part I keep coming back to. Because in a lot of companies, nobody has been assigned that job. The recommendation comes in. It sounds polished. It sounds reasonable. It sounds right. So it moves forward. That’s how the shift happens. Baron has a name for it. She calls it authority drift.

I hear versions of this from founders and executives all the time. They think they’re using AI as a tool. Then they realize no one has been clear about when the tool stops advising and starts deciding. Baron is a tech founder and former CEO who scaled Mike’s Hard Lemonade into a $200 million brand. She’s spent her career making consequential calls with incomplete information. She now advises executive teams on AI leadership and the future of work. What she keeps seeing isn’t leaders deliberately handing decisions to AI. It’s something subtler.

Leaders start using AI for input. The outputs are fast, confident, and often genuinely useful. Trust builds. Questioning slows. The recommendation stops feeling like advice and starts functioning as the answer. Nobody decided that. It just happened.

When advice becomes authority

Most organizations have answered a familiar question: What can AI do? Far fewer have answered the harder one: What should AI decide? One is about capability. The other is about accountability. In the rush to scale AI and show results to boards and investors, the first gets answered while the second gets skipped.


© Inc.com