AI: Stimulus and Threat to Optimal Brain Functioning
AI is reshaping journalism, warfare, education, and social life.
AI lacks human empathy and moral judgment, raising ethical concerns about its use in warfare.
AI has made human interaction less necessary, which can cause atrophy of critical thinking and social skills.
Since the end of 2022, when ChatGPT introduced the general public to AI chatbots, we've progressed from a "machine" capable of human-like speech and writing to realistic videos of people with movie-star good looks, whom some users fall in love with, only to learn, alas, that they exist solely in the realm of computer code.
Currently, AI is challenging some of our most basic notions. Given AI's increasing deployment in areas such as journalism, war, education, and socialization, we have good reason to question the origin of anything we see, hear, or read: Is it human or AI?
If you want to see the inroads AI has made into journalism, pick up a copy of The Plain Dealer, Cleveland's major newspaper. You'll regularly encounter articles written by AI (although human editors still give each piece a final read before publication). Other papers, such as The New York Times, The Washington Post, and The Financial Times, haven't gone that far yet, but they are already experimenting with interactive chatbots.
When you combine these early AI-friendly writing efforts with steady decreases in newspaper staffing (from 400 newsroom employees at The Plain Dealer in the late 1990s to about 70 today), it's no surprise that readers and editors remain locked in a struggle: the need for more articles than the remaining staff can produce versus the clear preference readers have repeatedly expressed for human-written over AI-written journalism.
Like it or not, a new Information Age is now dawning.
An early worry about chatbots was that developers might deploy them irresponsibly in a sensitive area like combat. As things have turned out, however, it's not the developers but the U.S. government that has advocated using AI in ways some AI developers consider unsafe. The recent standoff between Anthropic and the Defense Department illustrates the difficulties that can ensue.
The Pentagon demanded that Anthropic have no say in how its AI product, Claude, is used, so long as that use complies with the law. Dario Amodei, the co-founder and chief executive of Anthropic, vociferously disagreed. As part of his negotiating position, Amodei insisted that Claude never be used for domestic surveillance of Americans or as a determiner of which weapons should be employed and against whom. "AI can undermine rather than defend democratic values," according to Amodei. "Some uses are also simply outside the bounds of what today's technology can safely and reliably do."
In what seems like the blink of an eye (Claude only appeared on the scene in 2023), Anthropic and other AI developers now find themselves at the epicenter of a moral, philosophical, and political maelstrom over the use of AI for mass surveillance or in fully autonomous weapons with no humans in the decision chain.
So who is correct here? The developers of the AI product? Or the Department of Defense, which, according to a statement released to the public, “will not bend to the whims of any for-profit tech company”? Despite the importance of this determination, it’s turning out to be no more than an academic question.
Only this week, we learned that Claude played a major role in the capture of Venezuelan leader Nicolas Maduro, as well as in the Operation Epic Fury attack on the Iranian regime that killed the country's supreme leader, Ayatollah Ali Khamenei. Both operations relied on AI (Claude) for surveillance and for military functions such as real-time targeting and target prioritization.
So what attitude should we take toward all this?
Since AI has been modeled on the human brain, perhaps, at the most basic level, the question might be reformulated: How would a typical human brain process all this?
In answering that question, keep in mind that AI chatbots aren't like human brains. Most importantly, a chatbot cannot be relied on to operate with empathy, ethics, and morality: three components of the human brain's operation that a chatbot lacks and that must be programmed into it. Relying on Claude or any other AI system is of limited use when determining who should be included on a kill list: soldiers, yes, but what about children or civilians? Over the centuries, civilized nations have held, and continue to hold, that civilians of any age should not be deliberately targeted.
AI in Schools: To Cheat or Not to Cheat
Education is another area of concern. Although we are still very early in the society-wide adoption of AI for teaching and learning, usage is already steadily increasing. In a February 24, 2026, publication from the Pew Research Center, nearly half of U.S. teens said they have used AI chatbots at some point for schoolwork: to search for information, summarize articles, or create or edit images and videos.
According to one of the teenage respondents in the Pew study, “Artificial intelligence will be able to be a force multiplier in terms of efficiency and accuracy. We are in … very early stages at this point. Everyone is going to have to know how to use AI, or they will be left behind.”
Not all teenagers are equally enthusiastic. Among the 34 percent who believe the impact will skew negative, opinions run a spectrum, extending from one teenage girl ("It destroys young people's minds and brains") to others who speak less alarmingly, citing overreliance on AI and the loss of critical thinking and creativity. One in five teens admitted doing all or part of their schoolwork with the help of a chatbot.
Even more concerning, just shy of 60 percent of teens believe that using AI to cheat has become a "regular" occurrence. The figure is higher still among teens who regularly use chatbots for schoolwork: more than three-quarters say that students at their school use chatbots to cheat.
The conclusion of the Pew report: “Our survey shows that many teens think cheating with AI has become a regular feature of student life.”
The Price of Never Leaving Your Room
The final concern pertains to AI and socialization. We now live in an environment where, employment obligations aside, it's generally unnecessary to leave one's apartment. Food can be ordered in; pickup and delivery can be arranged for laundry, clothes, and just about any "must-have" small enough to be maneuvered through an apartment door. But there is a price to pay for such convenience: social skills atrophy.
For instance, the brain loses its sense of pacing: how long to talk, how loudly to speak, when not to interrupt, and how to ask for something pleasantly rather than simply demanding it. Perhaps as a result of such social lapses, an increasing share of the population is now more comfortable with email and messaging than with face-to-face conversation.
Accompanying this gradual erosion of social skills, a lucrative market is developing for chatbots and avatars designed to relate to users as friends, advisors, therapists, and even lovers. But these roles cannot be dependably programmed into a chatbot, as evidenced by the spate of lawsuits against AI "therapists" whose sessions have led to suicides among their "patients."
The increasing use of speech-based rather than text-based chatbots can be expected to raise that toll further, and for an identifiable reason: voice-based interactions foster a feeling-based rather than a reason-based focus, a finding firmly established in marketing research.
Chatbots are now capable of conversing with all the flourishes you expect to hear from a human ("I'm glad you asked me that"; "Wonderful! You're really into what I'm saying").
Perhaps the most apt description of the current state of AI was expressed during my interview with the late Joseph Weizenbaum, professor of computer science at MIT: "We have not so much to fear robots that think like humans, as we do humans that think like robots."
If you substitute "chatbots" for "robots" in Weizenbaum's cautionary admonition, you have a fairly accurate appraisal of where things now stand and where we may be heading with AI.
If you or someone you love is contemplating suicide, seek help immediately. For help 24/7, dial 988 for the 988 Suicide & Crisis Lifeline, or reach out to the Crisis Text Line by texting TALK to 741741. To find a therapist near you, visit the Psychology Today Therapy Directory.
References
Schindler, David. "How speaking vs. writing to conversational agents shapes consumers' choice and choice satisfaction." Journal of the Academy of Marketing Science 52 (2).
Pew Research Center. "How Teens Use and View AI." February 24, 2026.
