
OpenAI’s new safety tools are designed to make AI models harder to jailbreak. Instead, they may give users a false sense of security.

05.11.2025
OpenAI has open-sourced two AI safety classifiers that let enterprises more easily set their own guardrails. Experts say the move could improve transparency, but it may also create new risks.

© Fortune