
Here’s how researchers are helping AIs get their facts straight

10.02.2025

AI has made it easier than ever to find information: Ask ChatGPT almost anything, and the system swiftly delivers an answer. But the large language models that power popular tools like OpenAI’s ChatGPT or Anthropic’s Claude were not designed to be accurate or factual. They regularly “hallucinate” and offer up falsehoods as if they were hard facts.

Yet people are relying more and more on AI to answer their questions. Half of all people in the U.S. between the ages of 14 and 22 now use AI to get information, according to a 2024 Harvard study. An analysis by The Washington Post found that more than 17% of prompts on ChatGPT are requests for information.

One way researchers are trying to improve the information AI systems provide is to have the systems indicate how confident they are in the accuracy of their answers. I’m a computer scientist who studies natural language processing and machine learning. My lab at the University of Michigan has developed a new way of deriving confidence scores that improves the accuracy of AI chatbot answers. But confidence scores can only do so much.
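The article does not spell out how a confidence score is computed, and the author’s method is not described here. As a rough illustration only, one common baseline is to average the probabilities a language model assigns to each token it generates: the `sequence_confidence` function and its toy log-probability values below are hypothetical, not the lab’s actual technique.

```python
import math

def sequence_confidence(token_logprobs):
    """Toy baseline confidence score: the average per-token probability.

    token_logprobs: list of natural-log probabilities, one per generated
    token (the kind of values many model APIs can return alongside text).
    Returns a number between 0 and 1; higher means the model assigned
    higher probability, on average, to the tokens it produced.
    """
    if not token_logprobs:
        return 0.0
    probs = [math.exp(lp) for lp in token_logprobs]  # log-prob -> probability
    return sum(probs) / len(probs)

# Toy numbers: an answer the model generated with high token probabilities
# versus one generated with low ones.
confident_answer = sequence_confidence([-0.05, -0.10, -0.02])
hesitant_answer = sequence_confidence([-1.2, -2.0, -0.9])
```

A score like this only reflects how sure the model is of its own wording, not whether the answer is true, which is one reason the article notes that confidence scores can only do so much.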

Leading technology companies are increasingly integrating AI into search engines. Google now offers AI Overviews that appear as text summaries above the usual list of links in any search result. Other upstart search engines, such as

© The Conversation