Are we in the middle of an extinction panic?
If you’ve followed the news in the last year or two, you’ve no doubt heard a ton about artificial intelligence. And depending on the source, it usually goes one of two ways: AI is either the beginning of the end of human civilization, or a shortcut to utopia.
It's hard to say which of those two scenarios is nearer the truth, but the polarized nature of the AI discourse is itself interesting. We're in a period of rapid technological growth and political disruption, and there are many reasons to worry about the course we're on. That much almost everyone can agree on.
But how much worry is warranted? And at what point should worry deepen into panic?
To get some answers, I invited Tyler Austin Harper onto The Gray Area. Harper is a professor of environmental studies at Bates College and the author of a fascinating recent essay in the New York Times. The piece draws some helpful parallels between the existential anxieties of today and those of the past, most notably in the 1920s and '30s, when people were (rightly) terrified about machine technology and the emergence of research that would eventually lead to nuclear weapons.
Below is an excerpt of our conversation, edited for length and clarity. As always, there’s much more in the full podcast, so listen to and follow The Gray Area on Apple Podcasts, Google Podcasts, Spotify, Stitcher, or wherever you find podcasts. New episodes drop every Monday.
Sean Illing
When you track the current discourse around AI and existential risk, what jumps out to you?
Tyler Austin Harper
Silicon Valley’s really in the grip of kind of a science fiction ideology, which is not to say that I don’t think there are real risks from AI, but it is to say that a lot of the ways that Silicon Valley tends to think about those risks come through science fiction, through stuff like The Matrix and the concern about the rise of a totalitarian AI system, or even that we’re potentially already living in a simulation.
I think something else that’s really important to understand is what an existential risk actually means according to scholars and experts. An existential risk doesn’t only mean something that could cause human extinction. They define existential risk as something that could cause human extinction or that could prevent our species from achieving its fullest potential.
So something, for example, that would prevent us from colonizing outer space or creating digital minds, or expanding to a cosmic civilization — that’s an existential risk from the point of view of people who study this and also from the point of view of a lot of people in Silicon Valley.
So it's important to be careful: when you hear people in Silicon Valley say AI is an existential risk, that doesn't necessarily mean they think it could cause human extinction. Sometimes it does, but it could also mean they worry about our human potential being curtailed in some way, and that gets into wacky territory really quickly.
Sean Illing
One of the interesting things about the AI discourse is its all-or-nothing quality. AI will either destroy humanity or spawn utopia. There doesn’t seem to be much space for anything in between. Does that kind of polarization surprise you at all, or is that sort of par........