Why AI Won't (Plausibly) Kill Everyone
A new book about AI has a provocative title: If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. Eliezer Yudkowsky and Nate Soares argue that the development of artificial intelligence that exceeds human intelligence will almost certainly lead to the extinction of our species. How plausible is the scenario that they think will lead to the death of all people?
The extinction scenario can be summarized by the following steps.
How likely are the steps in this scenario?
I asked four AI models (ChatGPT, Grok, Claude, and Gemini) to evaluate this scenario and found their answers highly insightful. The models agreed that the least plausible step is #4, that superintelligent computers will eventually want to get rid of humans. Here are some reasons (based on my interpretation of the AI models'........
