
Why OpenAI’s open-source models matter

07.08.2025

Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

OpenAI is opening up again.

The company’s release of two “open-weight” models—gpt-oss-120b and gpt-oss-20b—this month marks a major shift from its 2019 pivot away from transparency, when it began keeping its most advanced research under wraps after a breakthrough in model scaling and compute. Now, with GPT-5 on the horizon, OpenAI is signaling a return—at least in part—to its original ethos.

These new models come with all their internal weights exposed, meaning developers can inspect and fine-tune how they work. That doesn’t make them “open-source” in the strictest sense—the training data and source code remain closed—but it does make them more accessible and adaptable than anything OpenAI has offered in years.

The move matters, not just because of the models themselves, but because of who’s behind them. OpenAI is still the dominant force in generative AI, with ChatGPT as its flagship consumer product. When a leader of that stature starts releasing open models, it sends a signal across the industry. “Open models are here to stay,” says Anyscale cofounder Robert Nishihara. “Now that OpenAI is competing on this front, the landscape will get much more competitive and we can expect to see better open models.”

Enterprises, especially those in regulated industries like healthcare or finance, like to build on
