
AI Has Its Einstein Moment. It’s Missing Everything That Came After


AI’s risks are clear, but no institutions exist to act on them. Warnings alone cannot steer a technology this powerful.

In 1939, Albert Einstein signed a letter to President Roosevelt warning that nuclear fission could be weaponized, and that Germany might already be working on it. The letter did not stop anything on its own. What it did was name the stakes clearly enough that institutions could form around them. Within a few years, the Manhattan Project existed. Within a decade, arms control frameworks were being negotiated. The warning did not solve the problem, but it created the conditions under which the problem could be addressed.

That sequence – a legible threat, a warning issued, institutions mobilized – has become a kind of template for how societies imagine responding to transformative and dangerous technologies. Call it the Einstein moment: the point at which someone with credibility and clarity names what is at stake, and the world reorganizes accordingly.

For artificial intelligence (AI), that moment has already happened, several times over. The warnings have come, with increasing urgency and specificity, from leading figures such as Yoshua Bengio (Turing Award winner, “Godfather of AI”), Geoffrey Hinton (Turing Award winner, Nobel laureate, former Google VP), and Dario Amodei (CEO of Anthropic), as well as through collective statements like that of the Center for AI Safety, signed by top researchers and executives. The stakes are not obscure; they are widely articulated and, at least among experts, broadly understood. What has not followed is the second half of the sequence: a commensurate institutional response. And the reason it has not followed is not a failure of courage or eloquence. These voices have been both clear and persistent. Dario Amodei, for example, expanded on these concerns in a 38-page essay in January 2026, following his earlier endorsement of the Center for AI Safety’s May 2023 statement on AI risks.

It is, rather, a failure of architecture: a mismatch between the scale and speed of technological development and the structures of governance, coordination, and regulation available to respond.

Why the First Half – the Einstein Moment – Worked Then

What made Einstein’s warning actionable was not the warning itself. It was the environment it landed in. There was a bounded technology, fission, with a specific and identifiable danger. There was a nation-state capable of receiving and acting on a warning. There was a recognizable enemy giving the threat geopolitical urgency. And there was a relatively contained scientific community whose knowledge could be mobilized and directed.

These conditions made the second half of the sequence, the institutional mobilization, structurally possible. The warning had a sender, a receiver, a subject, and a mechanism for conversion into action.

None of these conditions exist for AI today. The technology is not bounded; it is general-purpose, already widely deployed, and developing across hundreds of organizations simultaneously. The danger is not a single identifiable catastrophe but a diffuse set of compounding risks: labor displacement, epistemic erosion, algorithmic governance, autonomous weapons, synthetic media. There is no single government positioned to act, and no contained scientific community to mobilize. The warnings have a thousand senders and no clear receiver.

This is not a reason for fatalism. It is a precise diagnosis of why the institutional response has not materialized, and what kind of response might actually work.

Three Reasons the Second Half Keeps Failing

The first is that AI safety has become a competitive terrain. In Einstein’s moment, the threat was external – a foreign adversary. That gave the warning geopolitical traction. For AI, the dynamic is inverted: every major power is both a potential source of risk and a party unwilling to slow down unilaterally. What it means for a system to be safe or aligned is not merely a technical question; it is a geopolitical one. A safety standard endorsed by one bloc is viewed with suspicion by another. International coordination is not just difficult under these conditions; it is structurally disincentivized.

The second is that incentives are systematically inverted in a way they were not in Einstein’s world. The Manhattan Project required deliberate, centralized acceleration: the challenge was mobilizing resources and talent fast enough. The AI challenge is nearly the opposite: the technology is already accelerating across a fragmented landscape of competing actors, and the difficulty is creating any mechanism for restraint. Governments seek strategic advantage. Corporations pursue market dominance. Researchers are rewarded for capability, not caution. Slowing down carries immediate, attributable costs for individual actors, while the risks of acceleration are diffuse and distributed. No actor needs to be reckless for the aggregate outcome to be harmful. The individually rational move is collectively dangerous: a structural problem, not a moral one.

The third is that power is concentrated in a way that forecloses the kind of institutional independence that made arms control possible. A small number of private organizations are building systems that will mediate the cognitive and economic lives of billions. Those most invested in rapid deployment have the greatest influence over how the technology is governed. Those most affected – the workers, the users, the communities – have almost no voice in its direction. The regulated and the regulator are, in important respects, the same entity. Einstein wrote to a government that was structurally separate from the scientists it mobilized. That separation no longer exists in any meaningful form.

The Character of the Harm Makes This Worse

There is a further reason the Einstein analogy is instructive precisely where it breaks down. Nuclear weapons produced one catastrophic, undeniable output. Hiroshima, for all its horror, created the psychological and political conditions for arms control. The clarity of a single event compelled a response. The damage was legible, attributable, and impossible to normalize.

AI produces no equivalent event. Its harms are incremental, ambient, and individually defensible at each step. A labor market hollowed out by automation looks, at every stage, like routine economic change. An information ecosystem saturated with synthetic content does not announce itself as a crisis; it simply makes reliable knowledge gradually harder to produce and trust. A generation that habitually outsources reasoning to machines will not trigger a moment of alarm; the cognitive shift will become visible only in retrospect, if at all.

This is the condition under which irreversible structural change becomes most likely. Institutions built to respond to acute crises are poorly equipped to detect and arrest slow-moving ones. By the time the damage is undeniable, much of it will be locked in. There will be no Hiroshima moment for AI, no single event that forces collective action. Which means waiting for one is not a strategy. It is an abdication.

What the Second Half Actually Requires

If the Einstein analogy still has something to teach, it is this: the warning was never sufficient on its own. What mattered was the institutional architecture that formed around it. The Manhattan Project was not a warning; it was a mobilization. Arms control treaties were not statements of concern; they were binding frameworks negotiated between adversaries and verified by mechanisms that did not depend on good faith.

Building the equivalent for AI requires several things that are not currently happening at scale.

Governance needs to be treated with the seriousness of financial regulation, not as a reaction to crisis, but as a standing system designed to prevent one. That means ongoing monitoring, mandatory disclosure, independent verification, and enforcement mechanisms with genuine authority. What exists today is largely voluntary, fragmented, and reactive.

International coordination must proceed even where trust is thin. The model is adversarial cooperation – arms control, not diplomatic consensus. Agreements that bind parties with conflicting interests, verified by mechanisms that do not depend on goodwill. The international scientific community, operating across borders with shared methodological standards, could play a critical epistemic role: not building the technology, but assessing its consequences rigorously enough that governments cannot credibly claim ignorance.

Power must be redistributed within the development ecosystem. That means genuine representation for affected communities in governance processes: not as a procedural courtesy, but as a structural check on capture. The current concentration, in which a handful of private organizations effectively set global norms by default, is not a stable or legitimate foundation for governing a general-purpose technology.

The Danger of Performing the First Half Indefinitely

The Einstein moment, properly understood, was never just about the warning. It was about what the warning made possible. A letter that led nowhere – that circulated, generated concern, and dissolved – would not be remembered as a moment at all.

That is precisely the risk now. Awareness of AI’s risks is widespread. The warnings have been issued, heard, and extensively discussed. What remains missing is the conversion of that awareness into institutional architecture capable of acting on it. And the history of other slow-moving structural crises, such as financial instability or climate change, suggests a consistent and sobering pattern: recognition spreads, concern is voiced, and the underlying dynamics continue largely undisturbed. Awareness can become a substitute for action, a way of performing seriousness that absorbs the energy that might otherwise produce change.

Einstein’s letter mattered because it was followed by something. The question for AI is whether the warnings already given will be followed by something too, or whether they will be remembered, if at all, as the first half of a sequence that never completed.


© The Times of Israel (Blogs)