
AI companies say safety is a priority. It’s not

09.07.2024

OpenAI uses hardball tactics to stifle dissent from workers about their technology, according to an open letter signed by nine current and former employees.

It could save us or it could kill us.

That’s what many of the top technologists in the world believe about the future of artificial intelligence. This is why companies like OpenAI emphasize their dedication to seemingly conflicting goals: accelerating technological progress as rapidly — but also as safely — as possible.

It’s a laudable intention, but none of these companies seems to be succeeding at it.


Take OpenAI, for example. The world’s leading AI company believes the best approach to building beneficial technology is to ensure that its employees are “perfectly aligned” with the organization’s mission. That sounds reasonable, but what does it mean in practice?

A lot of groupthink — and that is dangerous.

As social animals, it’s natural for us to form groups or tribes to pursue shared goals. But these groups can grow insular and secretive, distrustful of outsiders and their ideas. Decades of psychological research have shown how groups can stifle dissent by punishing or even casting out dissenters. Before the 1986 Challenger space shuttle explosion, engineers expressed safety concerns about the rocket boosters in freezing weather. Yet the engineers were overruled by their leadership, who may have felt pressure to avoid delaying the........

© San Francisco Chronicle
