

AI anxiety is turning volatile

After a Molotov attack on Sam Altman’s home and threats against OpenAI, a fringe but intensifying strain of AI fear is spilling into the real world.

[Photos: Tayfun Coskun/Anadolu via Getty Images; Anna Moneymaker/Getty Images]

Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week via email here.

Is the Altman firebombing just the start of extreme doomer violence?

On April 10, someone threw a Molotov cocktail at OpenAI CEO Sam Altman’s house in San Francisco. The alleged assailant, 20-year-old Daniel Moreno-Gama, didn’t stop there. He then went to OpenAI’s headquarters and told security guards he intended to burn the building down with everyone inside it. Two days later, someone allegedly fired two shots from a car driving past Altman’s house, but OpenAI said that event was unrelated to the firebombing and didn’t target Altman.

The firebombing is an extreme reaction to the rapid evolution of AI systems over the past few years, and to fears that such systems may not act in humans’ best interests. Moreno-Gama said as much in a “manifesto” police found in his possession. In it, he discusses the “purported risk AI poses to humanity” and “our impending extinction,” includes a personal letter urging Altman to change course, and advocates killing the CEOs of other AI companies and their investors.

Altman has spoken many times about the dangers of AI systems while also pushing OpenAI to develop and release increasingly intelligent models. Some have suggested that when Altman talks about the dangers of AI, it’s really a sort of humblebrag about OpenAI’s models (“so intelligent they’re dangerous”).

It’s true that AI labs continue to make big strides in intelligence with every new model. AI coding tools are speeding up development, so new releases and jumps in capability are coming more frequently. Meanwhile, the public has grown increasingly concerned, even angsty, about the risks of AI systems, which range from job losses to AI-assisted cybercrime to human extinction. AI’s transformation of business and life is just getting underway. Models will grow scarily smart. And with AI labs under pressure to deliver returns for their investors, there’s almost no chance of hitting “pause.” There’s little reason to think incidents like the Altman firebombing won’t happen again.

Sarah Federman, a professor of conflict resolution at the University of San Diego, says that people often resort to violence when they feel powerless to speak out effectively against a perceived wrong. “We’re starting to see the breaking point,” Federman says. “There is all of this fear and nowhere for it to go.” She also believes that as AI labs race to release the best model, concerns about ethics have been pushed aside.

She’s got a point. AI companies have spent significant time engaging with lawmakers, explaining how their systems work and why regulating model development can be counterproductive. Many in Washington, D.C., were charmed by Altman, whom they found forthright, earnest, and technically proficient. But these companies spend far less time speaking directly to the public. They don’t hold town halls or host AI ethics debates on Fox News or CNN. They’re more likely to start “institutes” to study the future effects of AI on society.
