When Chatbots Start to Seem Like They Have Minds
Mind perception is the tendency to attribute mental capacities such as thinking or feeling to another entity.
Research found that after exposure to a chatbot, people tended to attribute more mind to it.
Understanding the psychology of mind perception may become a key part of responsibly managing AI technologies.
When people first encounter systems like ChatGPT, Claude, or other AI chatbots, they usually think of them as tools: sophisticated ones, perhaps, but still just software. Yet after interacting with these systems, many people begin to experience something subtly different: the chatbot can start to feel a little less like a machine and a little more like something with a mind.
This reaction reflects a psychological process known as mind perception—the tendency to attribute mental capacities such as thinking, intentions, or feelings to another entity. Humans naturally do this with other people and animals, but we can also extend it to robots, computers, and even abstract things like corporations or nature.
In our recent research, we examined whether exposure to modern AI systems changes the degree to which people attribute minds to them. Across four experiments involving large language models (LLMs) such as ChatGPT, LLaMA, and Claude, we found that even brief exposure can increase people’s perception that these systems possess mind-like qualities.
Two Ways People See Minds
Psychologists often describe mind perception along two main dimensions. The first is agency, which refers to the capacity to think, plan, and act. The second is experience, which refers to the capacity to feel emotions or sensations such as pleasure, pain, or fear. People tend to grant machines agency more readily than experience. A computer can calculate or strategize, but few people are comfortable saying that a machine feels joy or sadness.

Large language models blur this boundary because they communicate in fluent, conversational language. They can explain ideas, generate stories, answer questions, and simulate empathy in ways that earlier software could not. As a result, they are particularly likely to trigger anthropomorphism—the tendency to attribute human-like qualities to nonhuman systems.
What Happens After Exposure
In our studies, participants rated how much agency and experience they believed AI chatbots possessed. Some participants were shown short examples of chatbot responses, while others interacted with the systems themselves in real time. Across experiments, a consistent pattern emerged: after exposure to the chatbot, people tended to attribute more mind to it. Even a brief demonstration of a model’s capabilities was enough to increase perceptions of its agency and, in some cases, its experience. Seeing the system generate thoughtful or creative responses appeared to make it seem more mind-like.
However, the type of exposure turned out to matter.
Why Interaction Isn’t Always Enough
One surprising finding was that reading examples of chatbot responses sometimes increased mind perception more than interacting with the chatbot directly. The reason likely lies in how people tend to use these systems. When participants interacted with the chatbot themselves, many asked straightforward factual questions—similar to how one might use a search engine. These “utility-oriented” interactions highlight the system’s ability to retrieve or organize information but may not showcase its more creative or socially expressive capacities. By contrast, curated examples can demonstrate a wider range of abilities, including humor, creativity, or unusual reasoning. In other words, exposure alone does not determine how people perceive AI. The nature of the interaction also plays an important role.
Our research also found that not everyone perceives AI in the same way. People who had more prior exposure to chatbots tended to attribute more mind to them overall. In addition, individuals with a stronger general tendency to anthropomorphize—seeing human-like qualities in nonhuman things—were more likely to attribute mental capacities to AI systems. These findings suggest that as AI becomes more integrated into everyday life, perceptions of these systems may shift over time. Familiarity may make them seem less like tools and more like social actors, at least in certain contexts.
The broader implication of our research is that perceptions of AI are not fixed—they evolve with experience. As people interact with chatbots more frequently and in different contexts, their intuitions about what these systems are and what they are capable of may continue to shift. Understanding the psychology of mind perception may therefore become a crucial part of designing and regulating AI technologies responsibly.
Jacobs, O., Pazhoohi, F., & Kingstone, A. (2026). Attributing Mind to Large Language Models: The Effect of Exposure and Individual Differences. International Journal of Social Robotics, 18(1), 16.
