We Trust AI Over Our Own Brains, Research Finds
Consulting AI for trip planning, medical advice or help writing a cover letter doesn’t just save time. It could be fundamentally reshaping how our brains process decisions, according to researchers studying how reliance on artificial intelligence reshapes human reasoning.
This isn’t the first time we’ve heard that AI affects critical thinking skills. But while most research has been observational, cognitive behavioral scientists at the University of Pennsylvania’s Wharton School of Business wanted to add to the empirical evidence. So they conducted experiments with almost 1,300 subjects and found that in 80% of the cases when participants chose to consult ChatGPT, they accepted wrong answers without stopping to scrutinize them.
“We call it adoption without verification,” Steven Shaw, a postdoctoral researcher in cognitive behavior at Wharton, said in an interview.
With generative AI systems like ChatGPT, Claude or Google Gemini just a tap away, “people can surrender their thoughts to AI and let it think for them,” he said. “They’re basically subverting the whole internal brain set of processes.”
Shaw and Wharton professor Gideon Nave, an engineer with a doctorate in computation and neural systems, have coined a term for the phenomenon: “cognitive surrender.” And they’re concerned it could erode the slower, more internal processes of intuition, reflection and analytical deliberation, the kinds that shape judgment and even a sense of self.
“It also, surprisingly, changes how confident we are in our responses, even ones that are not really critically examined by ourselves,” Nave said in a Wharton podcast on the research.
With AI increasingly embedded in daily life, everyone from educators to business and medical leaders is debating the cognitive cost-benefit ratio of the fast-developing technology. A 2024 study found that German university students who used large language models rather than Google search displayed less thorough reasoning and lower-quality arguments. And last year, a study from MIT Media Lab suggested that “excessive reliance on AI-driven solutions” may contribute to “cognitive atrophy” and the reduction of critical thinking abilities.
‘A High IQ’d, Trustworthy Best Friend’
Shaw and Nave see cognitive surrender as so widespread, and so potentially paradigm shifting, that they felt it time to update a foundational behavioral science model — the dual-process model of decision-making — to account for it. The model, which took shape in the early 1970s, describes how decisions draw on two internal processes: “System 1,” which involves quick and intuitive thinking, and “System 2,” which refers to slow and conscious reflection.
In a preprint paper published on social science research platform SSRN last month, the Wharton researchers propose a “System 3”: artificial cognition, which they define as a third thinking system that extends beyond the brain to conclusions reached through statistical inference, pattern recognition and machine learning. System 3, they write, “reframes human reasoning and may reshape autonomy and accountability in the age of AI.”
They call the augmented model the “Tri-System Theory.”
Dr. Elias Aboujaoude, a professor of psychiatry at Stanford University School of Medicine and director of the Program in Internet, Health and Society at Cedars-Sinai Medical Center, agrees that early signs of cognitive surrender abound. But it’s not just our desire to save time and outsource work that pulls us toward AI’s siren song, he said in an interview.
“It is also how AI systems psychologically manipulate us into surrendering,” said Aboujaoude, who was not involved with the Wharton research. “By sounding authoritative, data-driven and evidence-based, they come across as knowing what they’re talking about. By being sycophantic and always aiming to please, they come across as having our best interest at heart and like they would never fool us.”
The result, he added, is that AI makes us feel “like we are outsourcing decision-making and the thinking process to a high IQ’d, trustworthy best friend. This makes the surrender go more smoothly and with less resistance.”
Confidence, Even In Incorrect Answers
The University of Pennsylvania researchers undertook three separate studies in the Wharton Behavioral Lab and online. Each time, they presented subjects with logic and reasoning questions and the option to use ChatGPT to answer them. More than 50% of the time, participants — some of them Ivy League students — depended on OpenAI’s conversational chatbot. In return, they received randomized answers, some correct and others faulty, and the researchers assessed their confidence in the responses.
Those who chose to access ChatGPT were 10% more confident in their answers, according to the researchers.
“My biggest takeaway,” Nave said in an interview, “is that when AI is wrong, we see that people end up performing worse than if they had no AI at all, and become more confident in their wrong answers.”
We have long turned to technology to augment our own thinking. GPS gets us to appointments, calculators tally our bills and search engines answer virtually any question tugging at our minds.
But cognitive surrender means abdicating critical evaluation in a whole new way, the researchers say. Aboujaoude agrees.
“The lure of LLMs is such that I don't think our brains were built to be able to resist them or consume them responsibly,” he said. “My biggest fear in losing critical thinking skills is that we will no longer be able to approach AI itself with the critical thinking required to contain its effects.”
What Do Companies Behind LLMs Say?
Anthropic, Google and OpenAI didn’t immediately respond to a request for comment on how their tools might affect decision-making and other modes of critical thinking. The companies building these systems typically use more measured language, focusing on overreliance and the need for human oversight.
There’s no doubt these tools can be time savers and assist in highly structured tasks that require accuracy and precision, and the researchers readily acknowledge the benefits.
“AI gives us access to super intelligence,” Shaw said. “But in a lot of high-stakes contexts — education, health care — we don’t want that to be happening. How do we fight the rise of cognitive surrender in those contexts?”
It’s a question the researchers hope to explore further by taking the concept of cognitive surrender from the lab into environments where human discernment is seen as essential, however loosely that boundary is drawn.
“We're going to see how the software companies, the AI companies, will respond to this, how policymakers will respond to this, how educational institutions will respond to this,” Nave said.
The answers may ultimately rest less on companies than on individuals, and how much of our thinking we choose to keep for ourselves.
