Trump wanted a less 'woke' AI company for Iran war. Enter: OpenAI.
The artificial intelligence model you’ve been using to write your emails and offer dating advice has officially struck a deal with the Pentagon.
OpenAI, the company behind ChatGPT, announced on Feb. 27 that the Department of Defense would begin using its models in classified systems. The move prompted immediate backlash, with 1.5 million users reportedly leaving the platform.
On March 2, OpenAI CEO Sam Altman said the company would be amending the agreement to include language that “the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.” But in a meeting the next day, Altman told staff that the company had no control over how its technology was used by the government.
The possibility of spying on Americans is only one concern raised by this deal. People should be just as worried that the United States will use systems trained by civilians to program weapons that will aid our country in a war that no one wants.
The 'opportunistic and sloppy' deal deciding Trump's war games
The deal between the Department of Defense and OpenAI came about after Anthropic, the creator of the AI model Claude, backed out of its agreement with the government when concerns arose about the possibility of Claude being used to surveil American citizens or for “autonomous weapons.”
This led President Donald Trump and his administration to label the AI company “woke” and ban it from the Pentagon – basically confirming the company’s suspicions. It also led OpenAI to swoop in to strike a deal with the Department of Defense at the last minute.
I am wary of AI as a whole and refuse to believe that any company providing this service can be “woke,” but I can acknowledge that Anthropic made the right decision. Unfortunately, it was too little, too late: Claude was still used as part of the military’s AI program against Iran.
But to OpenAI, any nefarious use of its programming was considered A-OK until it started costing the company users. Even Altman acknowledged that the hasty agreement appeared “opportunistic and sloppy.”
Congress can't take AI out of the military, but regulate it
Like most Americans, I’m against the war in Iran regardless of whether AI is used in strategy. I don’t think that having humans decide where to drop bombs is ethical, but having a nonhuman entity make those decisions is worse than what was previously the worst-case scenario.
In war game simulations conducted by King’s College London, AI models chose a nuclear option 95% of the time.
Something tells me that the unconscious computer program with zero humanity behind its coding isn't going to treat an actual wartime scenario any differently.
More than 2.5 million people have taken a pledge on the site QuitGPT to stop using OpenAI’s model. This is an important move, and one Altman is right to pay attention to.
I don’t think that ChatGPT users should settle for the explanation he has given the public, and I support any actions individuals are taking to sever ties with the company. But individual interactions can only do so much; Congress needs to take action to prevent AI models from being recklessly used by the Department of Defense.
I don’t think we can expect the government to abandon AI models, but legislators can put guardrails in place to keep them from being used against American citizens.
Politicians can only do so much about Trump’s decision to strike Iran; legislative action on that front is a moot point. They do have the power, however, to protect American citizens from what could potentially be the biggest privacy disaster of my lifetime.
Follow USA TODAY columnist Sara Pequeño on Bluesky: @sarapequeno.bsky.social
