What the Anthropic blow-up says about constitutional rights in an era of AI
Last Friday, President Trump announced on Truth Social that he was ordering U.S. government agencies to “immediately cease” using “Anthropic’s products,” including its chatbot Claude. On X, Defense Secretary Pete Hegseth directed the Pentagon to designate Anthropic a “supply-chain risk to national security.”
One day later, Sam Altman, CEO of Anthropic’s main rival, OpenAI, announced that his company will give the Pentagon full access to its ChatGPT systems for “any lawful purpose.” Previously, OpenAI’s federal contract had been confined to non-classified uses, like routine office tasks.
Trump’s embrace of OpenAI over Anthropic is telling, as it mirrors his administration’s widespread renunciation of the values underlying the U.S. Constitution itself.
Anthropic was founded in 2021 by seven employees who left OpenAI because, according to CEO Dario Amodei, “they believed in the need for ‘alignment or safety.’” Anthropic has since developed a written “Constitution” for Claude, which defines the company’s “training process, and its content directly shapes Claude’s behavior.”
Claude’s constitutional values include “not undermining appropriate human mechanisms to oversee AI;” “being honest;” “avoiding actions that are inappropriate, dangerous, or harmful;” and helping “users and the world.”
These values matter. As Amodei explained in a recent essay, “there is now ample evidence, collected over the past few years, that AI systems are unpredictable and difficult to control — we’ve seen behaviors as varied as obsessions, sycophancy, laziness, deception, blackmail, scheming, ‘cheating’ by hacking software environments, and much more.”
Amodei posits that by 2027, we may have an AI system that is “more capable than any Nobel Prize winner, statesmen, or technologist.” But “[i]s it hostile, or does it share our values? Could it militarily dominate the world through superior weapons, cyber operations, influence operations, or manufacturing?”
These are the kinds of concerns that led Anthropic to pump the brakes with the Pentagon. It signed a $200 million contract with the Pentagon in July, which Trump says will now be phased out over six months.
According to Amodei, the company “cannot in good conscience” compromise on two things: the military’s use of its AI models to power fully autonomous weapons or its use of AI to conduct mass surveillance of Americans.
Trump erupted over this on Truth Social: “The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution.”
Hegseth said on X that “Anthropic’s stance is fundamentally incompatible with American principles.”
The opposite is true. Although the First Amendment protects freedom of speech, assembly, and religion, AI has dramatically expanded the government's capacity to surveil those very activities.
Up until now, to create an individual dossier, human data analysts had to watch video footage, listen to recorded communications, and review emails, texts and documents collected over countless hours. But AI does it in seconds. It can identify faces, voices, emotions, and even gaits from CCTV, body cams, online photos and billions of social media posts. It predicts behavior, location, interests, purchases, relationships, financial transactions, health, even the tendency to commit crimes.
The Constitution is no match for such power.
Under the Fourth Amendment, the Supreme Court established in 1967 that a warrantless search and seizure is unconstitutional when a person has a reasonable expectation of privacy, an expectation that does not extend to activity in public. It has also held that handing over personal information to a third party, like a social media company, waives any Fourth Amendment protections.
In 2017, the court recognized that its precedents do not adequately address expectations of privacy in the digital age, holding that the government needs a warrant before using cell site records to track movements and that the third-party waiver does not apply. This term, in a case called Chatrie v. U.S., it will consider the constitutionality of what are called “geofence warrants,” which enable law enforcement to collect cell phone data from devices within a geographic area and search hundreds of millions of accounts to ultimately identify names and email addresses associated with devices deemed relevant to an investigation.
So for now, the Trump administration’s claim that it will use OpenAI only for “lawful purposes” means it can pretty much do what it wants — because the law remains unclear. Courts cannot keep up with the technology.
AI also poses problems under the Fifth and Fourteenth Amendments, which enshrine due process protections and equal protection under the laws. Already, algorithmic decision-making is influencing important governmental decisions like access to government benefits, bail, sentencing, parole, hiring, unemployment insurance, and lending. How can someone defend against a thought process that exists only inside a digital black box?
The Constitution generally does not bind private conduct — it is a mechanism to keep the government accountable to the people. Yet AI is not democratically elected. If it supplants Congress when it comes to making laws or declaring war, the president when it comes to battlefield decisions, or judges when it comes to deciding who wins disputes based on detailed surveillance data, the Constitution will cease to be our governing charter. What will replace it is anyone’s guess — perhaps only Sam Altman will know.
Kimberly Wehle is a professor at the University of Baltimore School of Law and author of “How to Read the Constitution — and Why,” as well as “What You Need to Know About Voting — and Why” and “How to Think Like a Lawyer — and Why.”
Copyright 2026 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
