How Science Is Learning to Explore Ground Truth
Clinical intuition and mind wandering may be computationally optimal strategies for complex realities.
In simulations, random experimentation produced more accurate accounts of reality than theory-driven strategies across all conditions tested.
Random experimentation and unfocused thinking can benefit a person by challenging unconscious scripts.
Some clinicians have an uncanny quality. A colleague describes herself and others with this instinct as "witchy"—a capacity to know things about patients they haven't said yet, to follow a stray association to a song lyric or a half-remembered cultural reference and arrive, reliably, at something the patient urgently needed to say but couldn't reach on their own.
Artificial intelligence opens up intriguing possibilities for discovery, especially as connections that human beings would never see pop out of apparently unrelated data. Despite the risk of hallucination, remarkable progress is on the horizon.
AI mirrors this "witchy" instinct. In medicine, key findings have surfaced from apparently unrelated retinal scans (Zhou et al., 2023). RETFound, for example, predicted Parkinson's disease and heart attacks from eye images meant for glaucoma detection, signals that humans cannot extract from routine exams. With safeguards against errors, AI promises to systematize serendipity, transforming clinical hunches into scalable discoveries for millions.
It is becoming increasingly evident that information surrounds us, and we can draw meaning from it if we know how to look.1
Demystifying Mysticism
A new paper in Collective Intelligence (Dubova, Moskvichev, & Zollman, 2026) may offer the first formal computational explanation for why this kind of knowing works—and why more disciplined, theory-driven approaches often fail in ways invisible to the people using them.
The researchers simulated scientific communities—agents collecting data, building theories, sharing findings—and pitted every major philosophical strategy for choosing experiments against a simple baseline: choosing at random. Across more than 9,000 simulations, the result held in every condition. Random experimenters developed categorically better theories of reality. The mechanism is uncomfortably simple: Theory-driven agents inadvertently curate their own evidence, sampling from regions their theories already explain, collecting progressively narrower data that happens to be easy to account for. The data they never collect is the data that would have forced their theories to grow.
A Walk in the Park in the Rain
Random sampling, by contrast, combines two properties that are genuinely hard to achieve on purpose: diversity of observations and representativeness of the underlying phenomenon. These turn out to be what matter most for building theories that capture reality rather than a convenient subset of it. The paper's most psychologically loaded finding: the strategy producing the highest self-assessed success produces the lowest actual success.
Theory-driven agents think they're doing beautifully—their data fits their theories with increasing elegance. They are substantially wrong about the world they're studying, and nothing in their experience tells them so. (The structural resemblance to what I've called Accomplishment Hallucination2 with AI is hard to miss.) Meanwhile, random experimenters look the least successful by their own metrics—their data is messier, harder to explain—while actually learning the most.
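For readers who want to see the dynamic rather than take it on faith, here is a minimal toy sketch in Python. It is my illustration, not the authors' model: the ground-truth function, the quadratic "theory," and the exact sampling rules are assumptions chosen only to make the logic visible. Two simulated scientists fit the same simple theory to a more complicated reality; one chooses each new experiment at random, the other near the observations its current theory already explains best. Each then scores itself on its own data and against a representative sweep of the ground truth.

```python
import numpy as np

rng = np.random.default_rng(0)

def ground_truth(x):
    # A "reality" richer than any theory the agents can entertain.
    return np.sin(3 * x) + 0.3 * x**2

def fit_theory(xs, ys):
    # The agents' theory space: quadratic polynomials only.
    return np.polynomial.Polynomial.fit(xs, ys, deg=2)

def run_agent(strategy, n_rounds=60, noise=0.05):
    xs = list(rng.uniform(-3, 3, size=4))           # a few seed observations
    ys = [ground_truth(x) + rng.normal(0, noise) for x in xs]
    for _ in range(n_rounds):
        theory = fit_theory(xs, ys)
        if strategy == "random":
            x_next = rng.uniform(-3, 3)             # sample anywhere
        else:
            # "Confirmation": sample near the observation the current theory
            # already explains best (the smallest residual).
            residuals = np.abs(theory(np.array(xs)) - np.array(ys))
            anchor = xs[int(np.argmin(residuals))]
            x_next = float(np.clip(anchor + rng.normal(0, 0.15), -3, 3))
        xs.append(x_next)
        ys.append(ground_truth(x_next) + rng.normal(0, noise))
    theory = fit_theory(xs, ys)
    grid = np.linspace(-3, 3, 400)                  # representative probe of reality
    self_assessed = np.mean((theory(np.array(xs)) - np.array(ys)) ** 2)
    actual = np.mean((theory(grid) - ground_truth(grid)) ** 2)
    return self_assessed, actual

for strategy in ("confirmation", "random"):
    errs = np.mean([run_agent(strategy) for _ in range(20)], axis=0)
    print(f"{strategy:>13}: error on own data {errs[0]:.2f} | error vs. reality {errs[1]:.2f}")
```

Run this and the confirmation-driven agent tends to report the lower error on its own data while landing farther from the ground truth, whereas the random agent looks messier by its own lights and ends up closer to reality, which is the inversion the paper describes.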
Now consider the "witchy" clinician. In psychoanalytic practice, following one's own unbidden associations—the wandering mind, a song lyric or poem, an old TV show, a dreamlike scene in reverie—leads, almost without exception in my experience, to material that would never have emerged from staying locked into the expected therapeutic thread, though this approach is not right for everyone.
The therapist who follows the theory-driven line of inquiry may be running Dubova's confirmation strategy: sampling from regions of the patient's psyche that the existing formulation already explains. The therapist who follows a seemingly random association is running the random strategy: a broader, more representative search, one that explores the available space better than directed approaches can while opening up new spaces. This is consistent with Sigmund Freud's original technique, in which the patient is advised to free-associate and the analyst listens without filtering the data, with evenly suspended attention.
There is a distinctive phenomenology to these moments. The Boston Change Process Study Group has described "moments of meeting" — charged instants where something shifts and both people recognize it. In my clinical experience, these arrive with heightened perception: a buzzing quality in mind and body, a brightening of the visual field, a narrow attentional lock on the patient coupled with a paradoxically broad sense of one's own awareness.
This may correspond to neurodynamically specific processes—perhaps a brief co-activation of the default mode and task-positive networks, which ordinarily suppress each other. Neither unfocused reverie nor concentrated analysis alone... perhaps both simultaneously.
Beyond the Consulting Room
The implications extend well beyond clinical work. Stuckness—one of the most common human complaints—is structurally identical to what Dubova's confirmation agents do to themselves. A person locked in self-attack samples only evidence confirming the worst theory about themselves.
Obsessional thinking traverses the same narrow territory, over and over (and over and over again and again). What often helps—mind wandering, music, nature, adopting new perspectives, exposure to the unfamiliar—is structurally reminiscent of a random strategy: broadening the sampling space so the system encounters observations its current model can't predict. At the same time, it cannot be truly random, because the brain's output is constrained in many ways: chaotically deterministic and complex, but not, strictly speaking, random.
As Simple as Possible, but Not Too Simple
Let's be precise about what this is and isn't. This is not an argument for trusting your gut, for mysticism, for supernatural intuition. I've always found that to be a false dichotomy—the choice between irrational explanation and dismissive rationalism. The "witchy" clinician isn't channeling anything. She's running a computationally superior search algorithm, so to speak. This is something AI excels at doing.
The wandering mind isn't irrational—it's optimal for the kind of high-dimensional, multi-causal reality that human life actually presents. There's nothing mystical about it, though it is wondrous.
There is an irony at the meta-level. The scientific method, built on disproving the null hypothesis, is here complemented by an approach that effectively rejects the null hypothesis that rejecting the null hypothesis is the primary and best mode of inquiry.
The two methods are complementary. The question for those of us feeling stuck—in a pattern of thinking, a relationship, a career—is whether we're running a confirmation process in our own lives: sampling the same evidence, building a tighter account, feeling increasingly certain about an increasingly narrow picture. This research proposes a useful alternative to relying on mystification.
1. Brenner, G. H. (2025). Neurofluidity: Playing With a Concept. Psychology Today.
2. Brenner, G. H. (2026). Accomplishment Hallucination: When the Tool Uses You. Psychology Today.
3. Dubova, M., Moskvichev, A., & Zollman, K. (2026). Against theory-motivated experimentation: Can random experimental choice lead to better theories? Collective Intelligence, 5(1), 1–22.
4. Zhou, Y., Li, K., Liu, X., Zhang, Z., Cao, H., Milea, D., ... & Keane, P. A. (2023). A foundation model for generalizable disease detection from retinal images. Nature, 622(7981), 156–163. https://doi.org/10.1038/s41586-023-06555-x
