Why Expert Predictions So Often Fail
True expertise is about judgment under constraints, not predicting uncertain futures.
Prediction strips away the context and evidence that expert judgment depends on.
As problems move beyond familiar patterns, expert forecasts drift toward chance.
A while back, I came across an article titled “Trust the Experts? It’s a Bad Bet,” and at first glance it seemed to echo a familiar claim: that expertise itself has become less trustworthy[1]. But reading past the headline revealed a much narrower and very different argument. The real target wasn’t expertise in general; it was expert prediction. In essence, the claim was that experts are often poor forecasters of the future, and trusting their predictions is usually a losing proposition.
On that point, the author is largely right. Trusting expert predictions is often a bad idea. But that’s not because expertise itself is hollow or overrated. It’s because what makes someone a legitimate expert is rarely their ability to function as an oracle.
That’s an important distinction. Judging expertise by its ability to predict uncertain futures almost guarantees disappointment—and it sets up a false dichotomy in the process: either we defer to expert authority, or we decide expertise itself is overrated. Neither position is especially useful.
The more interesting question isn’t whether we should “trust experts” in some broad, abstract sense. It’s what we should expect true expertise to actually deliver—and where its limits lie. Experts aren’t hired to see the future. They’re hired to help us make sense of the present: to diagnose problems we already face, interpret evidence we already have, and weigh tradeoffs under real constraints. When we conflate that kind of judgment with prophecy, we end up misunderstanding both the value and the limits of expertise.
What Expertise Actually Is
Expertise isn’t primarily about prediction. It’s about judgment—specifically, the ability to recognize which kind of situation you’re in and to respond in ways that have worked before under similar conditions, a process known as recognition-primed decision making. Across domains, what distinguishes experts from novices isn’t superior logic or broader knowledge in the abstract, but experience with recurring patterns. Over time, experts learn which cues matter, which ones can be ignored, and which tradeoffs are likely to follow from different courses of action—ultimately giving rise to what Kahneman & Klein (2009) describe as skilled intuition.
When faced with a problem, experts don’t start from scratch by weighing every possible option. Instead, they recognize familiar elements, generate a small set of plausible responses, and then evaluate whether those responses fit the specifics of the situation at hand. When the context aligns with their prior experience, this process can be remarkably fast and effective. It’s also why expert intuition can look effortless from the outside—much of the work has already been done through repeated exposure to similar problems.
But that strength comes with a clear limitation. Expert judgment is tightly bound to context. It works well within what I’ve referred to elsewhere as an expertise “bubble”: the range of situations where an expert’s experience provides reliable guidance. As problems move farther from that bubble—toward novel conditions, unfamiliar environments, or long time horizons—the cues that support recognition begin to fade. At that point, experts are no longer drawing on well-calibrated judgment; they’re extrapolating beyond what their experience can reliably support.
Why Expert Predictions Often Look Like a Coin Toss
Imagine an expert is brought into an organization to help solve a problem. They’re given a general description of what’s going wrong and then asked, on the spot, to say what the organization should do. No opportunity to dig deeper. No chance to gather additional information. Just a judgment call.
In that situation, even a genuine expert has little choice but to guess. Some of the assumptions they make will be reasonable. Others won’t. And unless the problem is unusually simple or well-structured, there’s no reason to expect the final judgment to be much better than chance. That isn’t a knock on expertise; it’s what happens when judgment is forced to operate without the evidence it needs to do its job.
What differentiates experts in real settings isn’t their ability to issue answers on demand. It’s their ability to guide the collection of evidence in ways that make judgment possible. They know which questions to ask next, which information is worth gathering, and when additional data will no longer meaningfully improve the decision. That process refines and calibrates the pattern recognition that underlies expert judgment.
Prediction short-circuits that process. When we ask experts to forecast the future, we give them a problem that is explicitly future-oriented and, by definition, thin on direct evidence. There’s no opportunity for guided information gathering, iteration, or meaningful feedback. The expert is forced to fill in the gaps with assumptions about how people, organizations, or systems will behave. Some of those assumptions will be right. Others won’t. Averaged out, accuracy drifts toward a coin toss.
This is why expert predictions so often disappoint us. It’s not that experts suddenly lose their competence when asked about the future. It’s that prediction removes the very mechanism that makes expertise useful in the first place: context-sensitive judgment refined through selective evidence. When we strip that away, what remains isn’t expert insight so much as guesswork—and the results look exactly like you’d expect.
What Expertise Is (and Isn’t) Good For
None of this means expertise is useless, or that skepticism toward experts is misplaced. It means we’ve been asking experts to do the wrong job. Expertise isn’t a crystal ball. It’s a way of making sense of messy situations in the present—of diagnosing problems, interpreting evidence, and helping decision-makers navigate tradeoffs under real constraints.
When used that way, expertise still matters a great deal. Experts are most valuable when they’re allowed to interact with a problem: to ask questions, gather targeted evidence, test assumptions, and revise their judgments as new information comes in. That’s where recognition-primed judgment shines—and where expert insight genuinely improves decisions.
Trouble starts when we collapse that process into a demand for prediction. When we treat experts as forecasters of uncertain futures, we strip away the context and evidence their judgment depends on, then fault them when accuracy collapses. The result is predictable disappointment, followed by an unhelpful backlash against expertise itself.
If we want better decisions, the solution isn’t to stop listening to experts—or to trust them blindly. It’s to stop confusing judgment with prophecy, and to expect expertise to deliver what it actually can: informed guidance about what to do now, not guarantees about what will happen next.
[1] I recently wrote a piece that touches on this issue: The Counterfeit Era of Expertise
