AI hallucinations: a budding sentience or a global embarrassment?
In a farcical yet telling blunder, multiple major newspapers, including the Chicago Sun-Times and Philadelphia Inquirer, recently published a summer-reading list riddled with nonexistent books that were “hallucinated” by ChatGPT, with many of them falsely attributed to real authors.
The syndicated article, distributed by Hearst’s King Features, peddled fabricated titles based on woke themes, exposing both the media’s overreliance on cheap AI content and the incurable rot of legacy journalism. That this travesty slipped past editors at moribund outlets (the Sun-Times had just axed 20% of its staff) underscores a darker truth: when desperation and unprofessionalism meet unvetted algorithms, the frayed line between legacy media and nonsense simply vanishes.
The trend seems ominous. AI is now overwhelmed by a smorgasbord of fake news, fake data, fake science and unmitigated mendacity that is churning established logic, facts and common sense into a putrid slush of cognitive rot. But what exactly is AI hallucination?
AI hallucination occurs when a generative AI model (like ChatGPT, DeepSeek, Gemini, or DALL·E) produces false, nonsensical, or fabricated information with high confidence. Unlike human errors, these mistakes stem from how AI models generate responses: by predicting statistically plausible word patterns rather than retrieving or verifying established facts.
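To see what “predicting plausible patterns” means in practice, here is a minimal sketch using the small open-source GPT-2 model through the Hugging Face transformers library. The model choice and the prompt (a made-up book title) are illustrative assumptions, not the actual systems or titles involved in the newspaper blunder; the point is only that the model ranks likely continuations without ever checking whether the book exists.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small open model purely for illustration (assumption: GPT-2,
# not the proprietary chatbots named in the article).
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# A hypothetical prompt about a book that does not exist.
prompt = "The author of the novel Tidewater Dreams is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # The model outputs a score for every possible next token.
    logits = model(**inputs).logits

# Turn the scores at the final position into probabilities and list the
# five most "plausible" continuations. Nothing in this loop consults a
# database of facts; the model only ranks statistically likely wordings,
# so a confident-sounding but false attribution is a perfectly normal output.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {p.item():.3f}")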
There are several reasons why AI generates wholly incorrect information. It has nothing to do with the ongoing fearmongering over AI attaining sentience or even acquiring a soul.
Training on imperfect data: AI learns from vast datasets replete with biases, errors, and inconsistencies. Trained on such material, a model can end up reproducing myths, outdated facts, or claims drawn from conflicting sources.
Over-optimization for plausibility: Contrary to what some experts claim, AI is nowhere near attaining “sentience” and therefore cannot discern “truth.” GPTs in particular are giant planetary-wide neural…
© RT.com
