The Danger of Imperfect AI: Incomplete Results Can Steer Cancer Patients in the Wrong Direction
Cancer patients cannot wait for us to perfect chatbots or AI systems. They need reliable solutions now—and not all chatbots, at least so far, are up to the task.
I often think of the dedicated and overworked oncologists I have interviewed, who find themselves drowning in an ever-expanding sea of data: genomics, imaging, treatment trials, side-effect profiles, and patient comorbidities. No human can process all of that unaided. Many physicians, in an understandable and even laudable effort to stay afloat, are turning to AI chatbots, decision-support models, and clinical-data assistants to help make sense of it all. But in oncology, the stakes are too high for blind faith in black boxes.
AI tools offer incredible promise, and AI-augmented decision systems can improve accuracy. One integrated AI agent raised decision accuracy from 30.3%, the baseline of the underlying GPT-4 model, to 87.2%. Clinical decision-support AI systems in oncology already assist with treatment selection, prognosis estimation, and the synthesis of patient data. In England, for example, an AI tool called "C the Signs" helped boost cancer detection rates in GP practices from 58.7% to 66.0%. These are encouraging steps.
But anything short of 100 percent is not enough when life is at stake. Cancer patients cannot afford to wait while we resolve the issues these technologies still have. And the risk is something far worse than delay: bad decisions born of incomplete, outdated, or altogether fabricated information.
One of the most dangerous of these issues is "AI hallucination": cases in which a model confidently presents false information, invented studies, nonexistent anatomical structures, and…