
Say please? The best way to talk to an AI

25.02.2026

Do you have to be polite to AI?

From being polite to pretending you're on Star Trek, the advice you get about talking to chatbots can be truly bizarre, and totally useless. Here's what actually works.

When a group of researchers decided to test whether "positive thinking" made AI chatbots more accurate, it led to some surprising results. As they asked various chatbots questions, they tried calling the AIs "smart", encouraged them to think carefully and even ended their questions with "This will be fun!" None of it made a consistent difference, but one technique stood out. When they made an artificial intelligence pretend it was on Star Trek, it got better at basic maths. Beam me up, I guess.

People have all sorts of bizarre strategies to get better responses from large language models (LLMs), the AI technology behind tools like ChatGPT. Some swear AI does better if you threaten it, others think chatbots are more cooperative if you're polite and some people ask the robots to role-play as experts in whatever subject they're working on. The list goes on. It's part of the mythology around "prompt engineering" or "context engineering" – different ways to construct instructions to make AI deliver better results. Here's the thing: experts tell me that a lot of accepted wisdom about prompting AI simply doesn't work. In some cases, it could even be dangerous. But the way you talk to an AI does matter, and some techniques really will make a difference.

"A lot of people think there's some magic set of words you can use that will make LLMs solve a problem," says Jules White, a computer science professor who studies generative AI at Vanderbilt University in the US. "But it's not about word choice, it's about how you fundamentally express what you're trying to do."

In 2025, a user on X (formerly Twitter) posted a tweet asking, "I wonder how much money OpenAI has lost in electricity costs from people saying 'please' and 'thank you' to their models". Sam Altman, chief executive of OpenAI, which makes ChatGPT, responded. "Tens of millions of dollars well spent," he said. "You never know."

Most people read the last line as a cheeky reference to the idea of a potential AI apocalypse, although it's hard to know how seriously to take that "tens of millions of dollars" number. But politeness is also a practical question.

LLMs work by chopping your words into little chunks called "tokens", then analysing them statistically to come up with an appropriate response. That means every single thing you say, from your word choice to an extra comma, can affect how the AI responds. The problem is that the effect is almost impossible to predict. There has been all kinds of research looking for patterns in how minor changes to prompts shift results, but much of the evidence is conflicting and inconclusive.
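To picture why an extra comma matters, here is a deliberately simplified sketch of tokenisation. Real chatbots use learned subword vocabularies (such as byte-pair encoding), not a simple regular expression, so this toy splitter is an illustration only; the point is that a polite phrasing and a curt one produce different token sequences, and the model's statistics operate on those sequences.

```python
# Illustrative sketch only: real LLM tokenizers use learned subword
# vocabularies (e.g. byte-pair encoding), not this simple regex.
import re

def toy_tokenize(text: str) -> list[str]:
    # Split text into word chunks and individual punctuation marks.
    return re.findall(r"\w+|[^\w\s]", text)

# A polite request and a curt one yield different token sequences,
# so the model "sees" genuinely different inputs.
print(toy_tokenize("Please, answer carefully."))
print(toy_tokenize("Answer carefully"))
```

Even in this toy version, the comma and full stop become tokens of their own, which is why seemingly trivial wording changes can nudge a model's output in unpredictable ways.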

For example, one 2024 study found that LLMs gave better and more accurate answers when users asked politely instead of just issuing commands. Even weirder, there were cultural differences: compared with Chinese and English, chatbots responding in Japanese actually did slightly worse if you got a little too courteous.

But don't rush out and buy your AI a thank you card just yet. Another small test found a previous version of ChatGPT was actually more accurate when you........

© BBC