Human-level AI is not inevitable. We have the power to change course
“Technology happens because it is possible,” OpenAI CEO Sam Altman told the New York Times in 2019, consciously paraphrasing Robert Oppenheimer, the father of the atomic bomb.
Altman captures a Silicon Valley mantra: technology marches forward inexorably.
Another widespread techie conviction is that the first human-level AI – also known as artificial general intelligence (AGI) – will lead to one of two futures: a post-scarcity techno-utopia or the annihilation of humanity.
For countless other species, the arrival of humans spelled doom. We weren’t tougher, faster or stronger – just smarter and better coordinated. In many cases, extinction was an accidental byproduct of some other goal we had. A true AGI would amount to creating a new species, which might quickly outsmart or outnumber us. It could see humanity as a minor obstacle, like an anthill in the way of a planned hydroelectric dam, or a resource to exploit, like the billions of animals confined in factory farms.
Altman, along with the heads of the other top AI labs, believes that AI-driven extinction is a real possibility – a view shared by hundreds of leading AI researchers and prominent figures.
Given all this, it’s natural to ask: should we really try to build a technology that may kill us all if it goes wrong?
Perhaps the most common reply says: AGI is inevitable. It’s just too useful not to build. After all, AGI would be the ultimate technology – what a colleague of Alan Turing called “the last invention that man need ever make”. Besides, so the reasoning goes inside AI labs, if we don’t build it, someone else will – less responsibly, of course.
A new ideology out of Silicon Valley, effective accelerationism (e/acc), claims that AGI’s inevitability is a consequence of the second law of thermodynamics and that its engine is “technocapital”. The e/acc manifesto asserts: “This engine cannot be stopped. The ratchet of progress only ever turns in one direction. Going back is not an option.”
For Altman and e/accs, technology takes on a mystical quality – the march of invention is treated as a fact of nature. But it’s not. Technology is the product of deliberate human choices, motivated by myriad powerful forces. We have the agency to shape those forces, and history shows that we’ve done it before.
No technology is inevitable, not even something as tempting as AGI.
Some AI worriers like to point out the times humanity resisted and restrained valuable technologies.
Fearing novel risks, biologists voluntarily paused and then successfully regulated experiments on recombinant DNA in the 1970s.
No human has been reproduced via cloning.

© The Guardian
