
AI Has Its Einstein Moment. It’s Missing Everything That Came After

18.03.2026

AI’s risks are clear, but no institutions exist to act on them. Warnings alone cannot steer a technology this powerful.

In 1939, Albert Einstein signed a letter to President Roosevelt warning that nuclear fission could be weaponized, and that Germany might already be working on it. The letter did not stop anything on its own. What it did was name the stakes clearly enough that institutions could form around them. Within years, the Manhattan Project existed. Within a decade, arms control frameworks were being negotiated. The warning did not solve the problem, but it created the conditions under which the problem could be addressed.

That sequence – a legible threat, a warning issued, institutions mobilized – has become a kind of template for how societies imagine responding to transformative and dangerous technologies. Call it the Einstein moment: the point at which someone with credibility and clarity names what is at stake, and the world reorganizes accordingly.

For artificial intelligence (AI), that moment has already happened, several times over. The warnings have been issued by leading figures such as Yoshua Bengio (Turing Award winner, “Godfather of AI”), Geoffrey Hinton (Turing Award winner, Nobel laureate, former Google VP), and Dario Amodei (CEO of Anthropic), as well as through collective statements like that of the Center for AI Safety (signed by top researchers and executives), with increasing urgency and specificity. The stakes are not obscure: they are widely articulated and, at least among experts, broadly understood. What has not followed is the second half of the sequence: a commensurate institutional response. And the reason it has not followed is not a failure of courage or eloquence. These voices have been both clear and persistent. Dario Amodei, for example, expanded on these concerns in a 38-page essay in January 2026, following his earlier endorsement of the Center for AI Safety’s May 2023 statement on AI risks.

It is, rather, a failure of architecture: a mismatch between the scale and speed of technological development and the structures of governance, coordination, and regulation available to respond.

Why the First Half – the Einstein Moment – Worked Then

What made Einstein’s warning actionable was not the warning itself. It was the environment it landed in. There was a bounded technology, fission, with a specific and identifiable danger. There was a nation-state capable of receiving and acting on a warning. There was a recognizable enemy giving the threat…

© The Times of Israel (Blogs)