AI doesn’t have to reason to take your job
In 2023, one popular perspective on AI went like this: Sure, it can generate lots of impressive text, but it can’t truly reason — it’s all shallow mimicry, just “stochastic parrots” squawking.
At the time, it was easy to see where this perspective was coming from. Artificial intelligence had moments of being impressive and interesting, but it also consistently failed basic tasks. Tech CEOs said they could just keep making the models bigger and better, but tech CEOs say things like that all the time, including when, behind the scenes, everything is held together with glue, duct tape, and low-wage workers.
It’s now 2025. I still hear this dismissive perspective a lot, particularly when I’m talking to academics in linguistics and philosophy. Many of the highest profile efforts to pop the AI bubble — like the recent Apple paper purporting to find that AIs can’t truly reason — linger on the claim that the models are just bullshit generators that are not getting much better and won’t get much better.
But I increasingly think that repeating those claims is doing our readers a disservice, and that the academic world is failing to step up and grapple with AI’s most important implications.
I know that’s a bold claim. So let me back it up.
“The Illusion of Thinking”’s illusion of relevance
The instant the Apple paper was posted online (it hasn’t yet been peer reviewed), it took off. Videos explaining it racked up millions of views. People who may not generally read much about AI heard about the Apple paper. And while the paper itself acknowledged that AI performance on “moderate difficulty” tasks was improving, many summaries of its takeaways focused on the headline claim of “a complete accuracy collapse.”
