The Tragic Flaw in AI
One of the strangest things about large language models is not what they get wrong but what they take for granted: that every question already has an answer. They behave as if reality itself were a kind of crossword puzzle. The clues may be hard and the grid vast, but a solution is presumed to exist somewhere, just waiting to be filled in.
When you ask a large language model something, it doesn't encounter an open unknown; it encounters an incomplete pattern. Its job is not to ponder the uncertainty but to complete the shape: predict the next token, then the next, one forced choice after another. It moves forward because moving forward is the only thing it knows how to do.
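
To see the flaw mechanically, here is a minimal sketch of greedy autoregressive decoding in Python. Everything in it is a hypothetical stand-in: the tiny VOCAB, the toy_logits scoring function, and its arbitrary arithmetic bear no relation to any real model. What matters is the structure of the loop: at every step it must emit some token, and "the answer may not exist" is not a branch it can take.

    import math

    # Hypothetical toy vocabulary; a real model has tens of thousands of tokens.
    VOCAB = ["yes", "no", "maybe", "blue", "42", "<eos>"]

    def toy_logits(context):
        # Stand-in for a neural network: one arbitrary, deterministic score
        # per token. Only the interface matters (context in, scores out).
        return [(3 * len(tok) + 7 * i + len(context)) % 11
                for i, tok in enumerate(VOCAB)]

    def softmax(logits):
        # Turn raw scores into a probability distribution over the vocabulary.
        m = max(logits)
        exps = [math.exp(x - m) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    def complete(prompt, max_new_tokens=5):
        # Greedy decoding: every pass through the loop is forced to pick
        # some token. Nothing here can represent "this has no answer."
        context = list(prompt)
        for _ in range(max_new_tokens):
            probs = softmax(toy_logits(context))
            next_token = VOCAB[probs.index(max(probs))]  # argmax completion
            if next_token == "<eos>":
                break
            context.append(next_token)
        return context

    print(complete(["what", "is", "the", "meaning", "of", "life", "?"]))

Swapping the argmax for temperature sampling would change which token comes out, not the obligation to produce one; refusal has to be trained in as yet another pattern to complete.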
Humans experience not-knowing very differently. We linger, if not wallow, in hesitation and doubt. We feel…
