What happened when they installed ChatGPT on a nuclear supercomputer
How they’re using AI at the lab that created the atom bomb.
If there’s anything that makes people more uncomfortable than highly advanced AI or nuclear weapons technology, it’s the combination of the two. But there’s been a symbiotic relationship between cutting-edge computing and America’s nuclear weapons program since the very beginning.
In the fall of 1943, Nicholas Metropolis and Richard Feynman, two physicists working on the top-secret atomic bomb project at Los Alamos, decided to set up a contest between humans and machines.
In the early days of the Manhattan Project, the only “computers” on site were humans, many of them the wives of scientists working on the project, grinding through thousands of calculations on bulky analog desk calculators. It was painstaking and exhausting work, and the calculators were constantly breaking down under the demands of the lab, so the researchers began to experiment with IBM punch-card machines — the cutting edge of computing technology at the time. So Metropolis and Feynman staged their contest, giving the IBM machines and the human computers the same complex problem to solve.
As the Los Alamos physicist Herbert Anderson later recalled, “For the first two days the two teams were neck and neck — the hand-calculators were very good. But it turned out that they tired and couldn’t keep up their fast pace. The punched-card machines didn’t tire, and in the next day or two they forged ahead. Finally everyone had to concede that the new system was an improvement.”
Today at Los Alamos, a similar dynamic is playing out, as scientists at the lab increasingly rely on artificial intelligence tools for their most ambitious research. Like their punch-card predecessors, today’s AI models have a leg up on human researchers simply by virtue of not having to eat, sleep, or take breaks. Scientists say the models are also approaching tough problems in entirely new and unexpected ways, changing how research is conducted at one of America’s largest scientific institutions.
In recent weeks, in the wake of the feud between the Pentagon and Anthropic, as well as the reported use of AI software for targeting during the war in Iran, the partnership between the US military and leading AI companies has become a highly charged political topic. Less discussed has been the already extensive cooperation between these firms and the country’s nuclear weapons complex, under the supervision of the Department of Energy.
Last year, Los Alamos National Laboratory (LANL) entered into a partnership with OpenAI that allowed the lab to install the company’s popular ChatGPT AI system on Venado, one of the world’s most powerful supercomputers. In August, Venado was moved onto a classified network, meaning that the AI chatbot now has access to some of the country’s most sensitive scientific data on nuclear weapons.
That wasn’t all. Later that year, the Department of Energy, which oversees Los Alamos and the country’s 16 other national laboratories, announced a $320 million initiative known as the Genesis Mission, which aims to “harness the current AI and advanced computing revolution to double the productivity and impact of American science and engineering within a decade.”
Few people are in a better position to think about the upsides and downsides of revolutionary new technologies than the people who today populate the mesa once occupied by Robert Oppenheimer, Feynman, and the other pioneers of the nuclear age. But when I visited the lab in January, I found that the researchers there were remarkably sanguine about the more existential risks that often come up in conversation about AI, even as they worked on the production of the world’s most dangerous weapons.
“They think we’re building Skynet; that’s not what’s going on here at all,” LANL’s deputy director of weapons, Bob Webster, said, referring to the superintelligent system from the Terminator movies. Geoff Fairchild, deputy director for the National Security AI Office, volunteered that he does not have a “p(doom),” the Silicon Valley shorthand for how likely one believes it is that AI will lead to globally catastrophic outcomes, and doesn’t believe most of his colleagues do either. “We don’t talk about it. I don’t think I’ve ever had that conversation,” he added.
For Alex Scheinker, a physicist who uses AI for the maintenance and operation of LANL’s massive particle accelerator, AI is an extraordinarily useful tool, but a tool nonetheless. “It’s just more math,” he said. “I don’t like to think about it like it’s magic.”
Still, the nuclear-AI comparison is unavoidable. Given the technology’s transformative potential, the dangers it could pose to humanity, and the prospect of an innovation “arms race” between the United States and its international rivals, the current state of AI has frequently been compared to the early days of the nuclear age.
