Review: A Cognitive Neuroscientist's Take on How AI Models Think
Author Christopher Summerfield engages seriously with skeptics who claim that large language models are not really thinking.
Brian Doherty | From the May 2026 issue
(Viking)
These Strange New Minds is a comprehensive book for lay readers wondering how large language models (LLMs) work and how they might help or harm human culture.
Its author, the cognitive neuroscientist Christopher Summerfield, faces an inherent challenge: The pace of change in AI makes it difficult for any traditionally published book to feel fully up to date. Books from major publishers can take more than a year to move from manuscript to finished copy. Summerfield addresses this by adding a later-written afterword noting that LLMs are already reasoning and conversing more effectively than they did just two years ago. They are becoming more "agentic," helping users accomplish tasks rather than merely answering prompts, while also becoming more capable tools for crime and fraud.
Summerfield does not believe LLMs will destroy humanity. But he makes clear that dismissing what they can already do, or what they are likely to do, is shortsighted. Anyone who organizes their work or daily life through computers should not ignore AI's looming impact. That remains true even if how "deep learning" achieves its results is still, in some respects, "mysterious."
Summerfield engages seriously with skeptics who claim that, because LLMs merely predict or echo patterns derived from the vast corpus of human writing on which they are trained, they are not truly thinking or meaningfully imitating the human mind. LLMs, he acknowledges, "work by multiplying together large matrices of numbers," while our brains operate through "electrical signals in an organic medium." But that does not mean the outcomes—effective understanding and communication—are always meaningfully distinguishable. To "say that LLMs do not think at all," Summerfield writes, "requires a new and rather convoluted definition of what it means to 'think.'"
Brian Doherty is a senior editor at Reason and author of Ron Paul's Revolution: The Man and the Movement He Inspired (Broadside Books).
