AI is coming for the laptop class
My entire job takes place on my laptop.
I write stories like this in Google Docs on my laptop. I coordinate with my editor in Slack on my laptop. I reach out to sources with Gmail and then interview them over Zoom, on my laptop. This isn’t true of all journalists — some go to war zones — but it’s true of many of us, and of accountants, tax preparers, software engineers, and many more besides: perhaps more than one in 10 workers in all.
Laptop jobs have many charms: the lack of a commute or dress code, the location flexibility, the absence of real physical strain. But if you’re a laptop worker and not worried about what’s coming in the next decade, you haven’t been paying attention. There is no segment of the labor market more at risk from rapid improvements in AI than us.
The newest “reasoning models” from top AI companies are already essentially human-level, if not superhuman, at many programming tasks, which in turn has led new tech startups to hire fewer workers. Generative AI models like DALL-E, Sora, and Midjourney are actively competing with human visual artists; they’ve already noticeably reduced demand for freelance graphic design.
Services like OpenAI’s Deep Research are very good at internet-based research projects like, say, digging up background information for a Vox piece. “Agentic” AIs like Operator are able to coordinate and sequence these kinds of tasks the way a good manager might. And the rapid pace of progress in the field means that laptop warriors can’t even take comfort in the fact that current versions of these programs and models may be janky and buggy. They will only get better from here, while we humans will stay mostly the same.
As AIs have improved at laptop job tasks, progress on more physical work has been slower. Humanoid robots capable of tasks like folding laundry have been a longtime dream, but the state-of-the-art falls wildly short of human level. Self-driving cars have seen considerable progress, but the dream has proven harder to achieve than boosters thought. While AI has been improving rapidly, robotics — the ability of AI to work in the physical world — has been improving much more slowly. At this point, a robot plumber or maid is far harder to imagine than a robot accountant or lawyer.
Let me offer, then, a thought experiment. Imagine we get to a point — maybe in the next couple years, maybe in 10, maybe in 20 — when AI models can fully substitute for any remote worker. They can write this article better than me, make YouTube videos more popular than Mr. Beast’s, do the work of an army of accountants, and review millions of discovery documents for a multibillion-dollar lawsuit, all in a matter of minutes. We would have, to borrow a phrase from AI writer and investor Leopold Aschenbrenner, “drop-in remote workers.” How does that reshape the US, and world, economy?
Right now this is a hypothetical. But it’s a hypothetical worth taking seriously — seriously enough that I may or may not be visiting the International Brotherhood of Electrical Workers’ apprenticeship application most days, just in case I need work that requires a human body.
Fast AI progress, slow robotics progress
If you’ve heard of OpenAI, you’ve heard of its language models: GPTs 1, 2, 3, 3.5, 4, and most recently 4.5. You might have heard of their image generation model DALL-E or video generation model Sora.
But you probably haven’t heard of its Rubik’s Cube-solving robot. That’s because the team that built it was disbanded in 2021, about a year before the release of ChatGPT and the company’s explosion into public consciousness.
OpenAI engineer Wojciech Zaremba explained on a podcast that year that the company had determined there was not enough data on physical movement to keep making progress on the robot. Two years of work, between 2017 and 2019, was enough to get the robot hand to a point where it could unscramble a Rubik’s Cube successfully 20 to 60 percent of the time, depending on how well-scrambled the cube was. That’s … not especially great, particularly when held up next to OpenAI’s language models, which even in earlier versions seemed capable of competing with humans on certain tasks.
It’s a small story that encapsulates a truism in the AI world: the physical is lagging the cognitive. Or, more simply, the chatbots are beating the robots.
This is not a new observation: It’s called Moravec’s paradox, after the futurist Hans Moravec, who famously observed that computers tend to do poorly at tasks that are easy for humans, like perception and movement, and do well at tasks that are hard for humans, like complex calculation.
Why? Here we’re less sure. As the machine learning researcher Nathan…
© Vox
