
You can’t recall AI like a defective drug

12.03.2026


Why pharma-style governance doesn’t work for tech.

[Source Image: Pixabay]

At a recent AI summit in New Delhi, Sam Altman warned that early versions of superintelligence could arrive by 2028, that AI could be weaponized to create novel pathogens, and that democratic societies need to act before they are overtaken by the technology they have built. These concerns are widely shared across the industry. Geoffrey Hinton, the Nobel laureate known as “the godfather of AI,” has warned that creating digital beings more intelligent than ourselves poses a genuine existential threat. Mustafa Suleyman, CEO of Microsoft AI, devoted much of his book The Coming Wave to the argument that AI’s fusion with synthetic biology could put the tools to engineer a deadly pandemic within reach of a single individual. These are not warnings about a distant future. Last week, a clash over who controls AI and on what terms led to a complete collapse in the company’s relationship with the Pentagon.

When politicians and business leaders try to make sense of issues like these, they are often tempted to look to the pharmaceutical industry for a regulatory model. Senator Richard Blumenthal—one of the few legislators actively pushing for meaningful AI regulation—has proposed that the way the U.S. government regulates the pharmaceutical industry can serve as a model for AI oversight. The analogy makes intuitive sense. The pharma model shows that strict licensing and oversight of potentially dangerous emerging technologies can limit threats without placing undue restrictions on innovation.

The instinctive attraction of this approach isn’t confined to legislators. Many companies are applying the same logic internally—whether consciously or not—managing AI risk through stage-gate reviews, pre-deployment testing, and post-launch monitoring. The pharma model, in other words, is already the de facto governance framework for much of the industry. The problem is that it’s the wrong framework—and the differences are not just technical but existential.


Three disanalogies that matter

Pharmaceutical regulation works because the barriers to entry are high, the product is physical and controllable, and the development cycle is slow enough for oversight to keep pace. None of these conditions hold for AI.

First, barriers to entry are very different. Bringing a new drug to market costs an average of $1.1 billion, according to a 2020 study published in the Journal of the American Medical Association. The infrastructure alone—laboratories, clinical trial networks, manufacturing facilities—limits production to a relatively small number of identifiable companies that regulators can monitor. AI has no equivalent friction. Capable models can be built for a fraction of that cost, fine-tuned on consumer hardware, and deployed globally from a laptop. The universe of actors a regulator would need to track is not a handful of identifiable companies—it is potentially anyone, anywhere.

Second, a pharmaceutical product is physical. Manufacturing it requires raw materials, specialized equipment, and distribution logistics. All of this creates friction that regulators can exploit by imposing oversight checkpoints. Code has no such friction. Once released, an AI model’s weights can be copied bit for bit and shared across borders far faster than any physical weapon or industrial system could move; the marginal cost of replication is effectively zero. And you cannot recall software the way you recall a contaminated drug. Once it is in the wild, it stays in the wild.

Even capabilities delivered only through the cloud are vulnerable to replication, and thus to the circumvention of corporate or regulatory guardrails. In just the last month, Anthropic disclosed that three Chinese AI labs—DeepSeek, Moonshot, and MiniMax—had used 24,000 accounts to generate more than 16 million exchanges with Claude, extracting its most advanced capabilities through a technique called distillation. The Chinese labs did not need to infiltrate a supply chain or build expensive factories. They needed only API access and carefully crafted prompts, routed through proxy networks designed to evade detection. There is no pharmaceutical equivalent of this replicability.



© Fast Company