OpenAI’s Pentagon deal once again calls Sam Altman’s credibility into question
Altman was supportive of Anthropic in its contract dispute with the Pentagon, but all the while he was positioning his company to supplant its rival’s AI models with its own.
Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week via email here.
Familiar tensions around Sam Altman
OpenAI CEO Sam Altman voiced his support for Anthropic in its dispute with the Pentagon over the use of its AI for autonomous weapons targeting and domestic mass surveillance. He did so in a company meeting and during a CNBC Squawk Box appearance last Friday, the day Anthropic was effectively blacklisted by the Trump administration.
But two days earlier, on Wednesday, Altman had reportedly already begun talking to the Pentagon about a contract that would let OpenAI effectively replace Anthropic as the sole supplier of AI models for classified work. The day after Anthropic missed its “deadline” for agreeing to the Pentagon’s terms, Altman announced on X that his company had reached an agreement with the Pentagon to provide AI for that same classified work, adding that the contract contained guarantees that OpenAI’s models wouldn’t be used for autonomous weapons or domestic mass surveillance.
It seemed odd that OpenAI’s lawyers could secure such guarantees on so tight a timeline when Anthropic’s lawyers couldn’t over the weeks the company spent negotiating with the Pentagon. Altman seemed to explain it away in a March 1 tweet: “I think Anthropic may have wanted more operational control than we did,” he wrote. (Anthropic CEO Dario Amodei, for his part, said during a company meeting that OpenAI’s negotiations with the Pentagon amounted to “safety theater,” according to The Information.)
In an internal memo that Altman tweeted this week, he acknowledged that rushing to get a deal done with the Pentagon on the same day Anthropic lost its deal was a bad look. “The issues are super complex, and demand clear communication,” he wrote. “We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.”
All of this strongly suggests that OpenAI simply accepted the same or similar alternative contract language the Pentagon offered Anthropic at the eleventh hour—language that promised, in a completely non-binding way, not to use the AI for autonomous weapons or mass surveillance.
On Monday night, Altman said on X that the Pentagon had agreed to add more explicit language rooted in existing U.S. laws stating that OpenAI’s models wouldn’t be used for domestic surveillance. But didn’t Anthropic object to the Pentagon’s desire to use AI models for domestic surveillance programs already permitted under existing laws?
