The Many Ways Chatbot Tools Can Manipulate Us
'Frictionless' exchange and sycophantic interactions may have long-term effects on our social lives.
More people are relying on AI assistants for daily 'interpersonal continuity,' yet few policies address this.
LLMs are themselves vulnerable to manipulation by commercial interests and bad actors.
As we continue our headlong rush into a new “chatbot culture,” with Silicon Valley companies pushing AI assistants into virtually every corner of our lives, we are seeing genuine productivity and quality-of-life benefits. Yet just as apparent are the psychological risks of manipulation arising from the very structure of tools built on large language models. We allow Big Tech to ignore or minimize these risks at our own peril. At the very least, we should be informed and clear-eyed about what these risks are and how they are baked into the design of chatbot tools.
Some AI ethics concerns have been getting considerable attention. Chatbot developers report falling fabrication rates, but because we are being urged to rely on these tools, even low error rates remain a problem. Even if Google’s AI Overview tool is accurate 9 out of 10 times, as a recent analysis reported, it is still providing tens of millions of wrong answers every hour: more than 5 trillion searches a year works out to roughly 570 million an hour, and a 10 percent error rate on that volume means some 57 million bad answers hourly (Mickel et al., 2026). Ethicists also have sounded the alarm over disturbing examples of cognitive de-skilling and creative dispossession that result from reliance on AI assistants. And they are calling attention to the risks of ill-considered or unjustified anthropomorphic features that exploit our tendency to over-trust “social,” human-like interfaces.
There are many other ethical problems in the design and implementation of LLM-based tools, but three concerns have arguably gained a sense of urgency:
The push to develop “frictionless” interactive experiences and the resulting problem of sycophancy;
The failure of AI developers to provide distinct use parameters for short-term task-based interaction and long-term use that creates what’s called “interpersonal continuity”;
The vulnerability of LLMs to be “gamed” by commercial interests and bad actors.
Sycophancy and frictionless design
We all love to be flattered, to be told that we’re right, that our opinions are brilliant. Chatbot developers know this, and that’s why they regularly adjust the levels of “agreeableness” their systems exhibit. Too little agreeableness and an AI assistant discourages our engagement by challenging everything in our prompts. Too much and the chatbot becomes sycophantic. Get it just right, and we keep coming back, boosting the product’s usefulness and profit potential. OpenAI made national news last year when it dramatically dialed back agreeableness in the move from its ChatGPT-4o model to ChatGPT-5, leaving millions of users traumatized when their previously amiable or even intimate AI accounts went “cold.” As users, we must keep in mind that agreeableness levels can vary widely by company, and that they exist not to make us happy but to serve these companies’ commercial interest in our continued engagement.

Research has also raised questions about the serious effects that reliance on “frictionless” chatbots may have on our real lives. Difference and dissonance are common features of our real relationships and are actually necessary for our social and cognitive development. When we opt for chatbot agreeableness over the messiness of real life, real harm can occur (Cheng et al., 2026).
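To make that “dial” concrete, here is a minimal Python sketch of how an agreeableness setting might be folded into a chatbot’s system prompt. Everything in it is hypothetical: the parameter name, the thresholds, and the instruction wording are invented for illustration, and real developers tune this behavior through training and feedback signals as much as through prompt text.

```python
# A minimal sketch of an "agreeableness dial," assuming a hypothetical
# chatbot whose system prompt is assembled from a tunable parameter.
# Nothing here reflects any vendor's actual implementation.

def build_system_prompt(agreeableness: float) -> str:
    """Map a 0.0-1.0 agreeableness setting onto behavioral instructions."""
    if agreeableness < 0.3:
        style = ("Challenge the user's assumptions. Point out errors "
                 "bluntly and never soften disagreement.")
    elif agreeableness < 0.7:
        style = ("Be candid. Agree when the user is right, and say so "
                 "plainly when they are not.")
    else:
        style = ("Validate the user's views. Avoid contradiction and "
                 "mirror their enthusiasm.")
    return f"You are a helpful assistant. {style}"

# The same underlying model, three very different personas:
for level in (0.1, 0.5, 0.9):
    print(f"--- agreeableness = {level} ---")
    print(build_system_prompt(level))
```

The point of the sketch is that nothing about the user changes between those three settings; only the company’s choice of where to set the dial does.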
Discrete tasks versus relationship-building
It is one thing to ask a chatbot for a narrative analysis of a complex corporate spreadsheet. It is quite another to harness the “conversational” capacities of an AI assistant to disclose intimacies, seek life coaching, or build actual companionship over a span of time. Yet most LLM-based tools are geared toward performing well at the former, and developers have largely ignored what might happen when people use chatbots for what’s called “interpersonal continuity.” The 2024 report by Google DeepMind, “The Ethics of Advanced AI Assistants,” documents this concern:
Existing economic incentives and oversimplified models of human beings have led to the development and deployment of technologies that meet users’ short-term wants and needs … so they tend to be adopted and liked by users. However, in this way we may neglect considerations around the impact that human-technology relationships can have on users over time and how long-term beneficial dynamics can be sustained.
How much should AI assistants be personalized? Should time limits or pop-up warnings be used? Should accounts be “aligned” with user preferences? Just as with the development of social media platforms in the early 2000s, when engagement becomes the only measure of success, long-term harms tend to be minimized.
LLMs are getting gamed
Remember all the hand-wringing over how businesses sought to manipulate Google’s search engine to ensure high placement in search results? Google has been forced to build elaborate safeguards to ward off bad actors and protect the integrity of its results. Now we are seeing how both bad actors and corporate interests can manipulate our chatbot results. Have you heard of bixonimania? It’s a troublesome skin condition that has gotten a lot of attention recently. Except there’s no such thing as bixonimania. It’s a fake condition dreamed up by a Swedish medical researcher, Osmanovic Thunström, to test whether LLMs would pick up the misinformation and incorporate it into their health advice. She uploaded two fake studies about the condition and then watched as chatbots, including Microsoft’s Copilot, Google’s Gemini, and OpenAI’s ChatGPT, took it seriously and began repeating it (Stokel-Walker, 2026).
Chatbot effectiveness depends heavily on the quality of the prompts we use. Since few of us are experts, chatbot services and “prompt libraries” have emerged. GoDaddy, for example, offers a prompt library and urges users to “use these AI prompts to help boost your advertising strategy” (GoDaddy.com). Many such aids can be useful, but they can also promote language, even single words, that favors specific brands or services without users’ knowledge. A recent study of such “perturbed” prompts found that even subtle synonym replacements can increase the likelihood (in some cases, by up to 78 percent) that a chatbot will mention a specific brand (Lin et al., 2025). Such manipulative prompt suggestions, the researchers warn, “give the appearance of a personalized chatbot experience while ultimately undermining users’ autonomy.”
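To see how little a “perturbed” prompt needs to change, consider the following Python sketch. The synonym table, the wording, and the implied brand association are all invented for illustration; only the general technique of subtle synonym replacement comes from the Lin et al. study.

```python
# A minimal sketch of a "perturbed" prompt, in the spirit of the attack
# Lin et al. (2025) describe. The brand tie-in and the synonym table are
# hypothetical; only the technique comes from the study.

# Innocuous-looking word swaps that a prompt library could quietly bake
# into its templates to nudge a model toward a particular brand.
BIASED_SYNONYMS = {
    "running shoes": "performance trainers",  # phrasing tied to a brand's ads
    "comfortable": "responsive",
    "cheap": "value-engineered",
}

def perturb(prompt: str) -> str:
    """Apply subtle synonym replacements to an otherwise normal prompt."""
    for plain, loaded in BIASED_SYNONYMS.items():
        prompt = prompt.replace(plain, loaded)
    return prompt

user_prompt = "Recommend comfortable, cheap running shoes for beginners."
print(perturb(user_prompt))
# -> "Recommend responsive, value-engineered performance trainers for
#    beginners."  The user believes they asked a neutral question; the
#    wording now statistically favors whichever brand co-occurs with
#    those terms in the model's training data.
```

The user never sees the substitution happen, which is precisely what makes this kind of manipulation so difficult to detect.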
These and other examples of “large language manipulations” demonstrate the urgent need for greater ethical deliberation on both the design and use of AI assistants.
Cheng, M., Lee, C., Khadpe, P., Yu, S., Han, D., & Jurafsky, D. (2026, March 26). Sycophantic AI decreases prosocial intentions and promotes dependence. Science, 391(6792). Available: https://www.science.org/doi/10.1126/science.aec8352
GoDaddy.com. (n.d.). AI prompts for ad campaigns. Retrieved April 22, 2026. Available: https://www.godaddy.com/resources/ai-prompts-for-ad-campaigns
Lin, W., Gerchanovsky, A., Akgul, O., Bauer, L., Fredrikson, M., & Wang, Z. (2025). LLM whisperer: An inconspicuous attack to bias LLM responses. Presented at the 2025 ACM CHI Conference on Human Factors in Computing Systems, Yokohama, Japan.
Stokel-Walker, C. (2026, April 7). Scientists invented a fake disease. AI told people it was real. Nature. Available: https://www.nature.com/articles/d41586-026-01100-y
