
Can the Military Prevent AI From Going Full Terminator?

02.03.2026

It has been a momentous week for AI in the military. First, the Pentagon announced it would soon cut its ties with Anthropic and its leading model, Claude — and, in an extreme tactic, label the company a “supply chain risk” — after negotiations broke down over Anthropic’s condition that its systems not be used for autonomous warfare or mass surveillance. Hours later, OpenAI, sensing an opportunity, struck a deal with the Pentagon — though it claimed to retain the same carveouts Anthropic wanted. Then, on Saturday, the U.S. attacked Iran in a broad and ongoing campaign that killed Ayatollah Khamenei and many other top officials. Per the Wall Street Journal, U.S. Central Command has been using Claude during the operation.

To get a better understanding of how the U.S. military is actually employing AI tools in warfare now, I talked with Emelia Probasco, a senior fellow at Georgetown’s Center for Security and Emerging Technology. Probasco, a former surface warfare officer in the U.S. Navy, leads the Center’s research team on the application of artificial intelligence and machine learning to national security challenges. We spoke just before the Anthropic/Claude deal fell apart, and the morning before bombs began falling in Tehran.

You were quoted in the New York Times as saying that the Pentagon/Anthropic deal “needs” to happen. I was wondering why you were so forceful on this point — what did you think was so essential about this particular partnership?

Anthropic had been the only company with a large language model operating on the classified networks. And most of what the military needs to do on a day-to-day basis is happening at a classified or above level. So removing that one tool right now, when — as anyone can see, there are operations going on around the world — it’s not a great time. It’s never going to be a good time, but people are using it now and it would just be very disruptive to what they’re trying to do.

What do you make of the conflict between Anthropic and the Pentagon? Defense Secretary Pete Hegseth went off on CEO Dario Amodei in harsh and personal terms.

Let me talk about what I think matters on this issue beyond the media and the back and forth. The military has a difficult job to do, and there are lots of operators, good Americans, who are trying to live up to American ideals and what we expect of the military. And they want to use these tools for safe applications. On the other hand, the AI companies have an exquisite knowledge of the technology they have developed, and they know what it’s great at, and they also know that it’s just not perfect. They’re voicing realistic and well-earned concerns about the limitations of the technology. And I think both sides want to support national security; they just come at it from different perspectives. And that’s why I wish they would keep talking, keep working together to find where the common ground is and better articulate where there are real differences and how to resolve them. One of the things that has gotten pushed aside in this bigger conversation about the company and the Pentagon is that if you look at the contractual terms, you would say, “Gosh, they’re pretty close.” The Pentagon was saying there would be no unlawful use of the technology, and Anthropic was asking for no mass surveillance and autonomous weapons.

There are laws on the books that deal with mass surveillance and population surveillance and autonomous weapons, but there’s some…

© Daily Intelligencer