
Anthropic is fighting with a big client, and it’s actually good for its brand

20.02.2026

Can a headline-making squabble with a client actually be good for a brand? This week’s dispute between the Department of Defense and Anthropic, a high-profile player in the super-competitive field of artificial intelligence, may be just that. 

The dispute involves whether the Pentagon, which has an agreement to use Anthropic technology, can apply it in a wider range of scenarios: all “lawful use” cases. Anthropic has resisted signing off on some potential scenarios, and the Pentagon has essentially accused it of being overly cautious. As it happens, that assessment basically aligns with Anthropic’s efforts (most recently via Super Bowl ads aimed squarely at prominent rival OpenAI) to burnish a reputation as a thoughtful and considered AI innovator. At a moment when the potential benefits and harms of AI are more hotly debated than ever, Anthropic’s public image tries to straddle the divide.

Presumably Anthropic (best known to consumers for its AI chat tool Claude) would prefer to push that reputation without alienating a lucrative client. But the underlying feud concerns how the military can use Anthropic’s technology, with the company reportedly seeking limits on applications involving mass surveillance and autonomous weapons. A Pentagon spokesman told Fast Company that the military’s “relationship with Anthropic is being reviewed,” adding: “Our nation requires that our partners be willing to help our warfighters win in any fight.” The department has reportedly threatened to label Anthropic a “supply chain risk,” lumping it in with supposedly “woke” tech companies, causing potential problems not just for Anthropic but for partners like Palantir.

So far Anthropic’s basic stance amounts to: This is a uniquely potent technology whose eventualities we don’t fully comprehend, so there are limits to uses we’ll currently permit. Put more bluntly: We are not reckless.


Not moving so fast that you break important things—like user trust, or civilization—is a message that’s of a piece with the official image Anthropic has sought to cultivate. The company was founded by OpenAI refugees who argued back in 2021 that OpenAI was prioritizing monetization over safety. Its recent Super Bowl ads are the highest-profile example of this branding so far: directly mocking OpenAI for experimenting with advertising on its consumer-facing product ChatGPT, and presenting the results as a slop-dystopian mess.

The spots were, as Fast Company’s Jeff Beer explained, a rare example of straight-up “ire slung at a category competitor.” They could arguably be the first salvo in a branding battle akin to Apple vs. Microsoft, with Anthropic seizing the role of righteous challenger. (OpenAI’s initial response included belittling Anthropic’s business, which only reinforces the latter’s underdog pose.)

As a brand image to shoot for, being the responsible AI player is an understandable goal. The technology has been divisive for years at this point, and lately that’s reached a crescendo. Seen by many as a threat to privacy, a job-killer, an environmental menace, and a source of endless misinformation and slop, it’s simultaneously touted by Silicon Valley elites and their intellectual brethren as an unprecedented boon to humanity. 


© Fast Company