
The AI apocalypse is the least of our worries

What is your p(doom)? This is the pseudo-scientific manner in which some people express the strength of their belief that an artificial superintelligence running on computers will, in the coming decades, kill all humans. If your p(doom) is 0.1, you think it 10 per cent likely. If your p(doom) is 0.9, you’re very confident it will happen.

Well, maybe ‘confident’ isn’t the word. Those who have a high p(doom) and seem otherwise intelligent argue that there’s no point in having children or planning much for the future because we are all going to die. One of the most prominent doomers, a combative autodidact and the author of Harry Potter fan-fiction named Eliezer Yudkowsky, was recently asked what advice he would give to young people. He replied: ‘Don’t expect a long life.’

Expressing such notions as probabilities between 0 and 1 makes them sound more rigorous, but assigning numerical likelihoods to one-off potential catastrophes is more like a game of blindfold darts: no one agrees on how such figures should be calculated. Nor does anyone actually know how to build an artificial superintelligence, or understand how one, if it were possible, would behave, despite reams of science-fictional argumentation by Yudkowsky and others. Everyone’s just guessing, and going off the vibes they get from interacting with the latest chatbot.

The AI doomers are the subject of too many chapters in Tom Ough’s book, which traces the career of one of their godfathers, the philosopher Nick Bostrom and his........

© The Spectator