Can Prospects for Nuclear War Get Any Worse? Sure, We Can Put AI in Charge
Can we possibly get away from AI’s ubiquitous presence in our lives? Probably not, but as long as AI is in our faces 24/7, it’s time to start seriously pushing back against its outsized and overwhelming influence. Troubling stories tumble out of the media daily. Employees at a major fast-food chain must now wear AI headsets that tell them how friendly they’re being to customers and coach them on their work. (AI is now posing as our servant, but in the years ahead will the dynamic be reversed?)
And then there is the looming data center controversy, with Big Tech companies rapidly taking over huge swaths of land across the US to build massive and environmentally unfriendly data centers. Fortunately, this trend is now emerging as a campaign issue given early and cascading effects on electricity prices. In general, AI is having a tough year in the court of public opinion. Witness this cover story in a recent issue of Time magazine: “The People vs AI.” The article noted that “a growing cross section of the public—from MAGA loyalists to Democratic socialists, pastors to policymakers, nurses to filmmakers—agree on at least one thing: AI is moving too fast…. A 2025 Pew poll found… the public thinks AI will worsen our ability to think creatively, form meaningful relationships, and make difficult decisions.” Along with Immigration and Customs Enforcement-related pushback, a spontaneous wellspring of grassroots activism appears to be bubbling up against the AI juggernaut and the patently undemocratic backdoor power grab by technocrats and the companies behind them.
One of the greatest concerns in the public sphere is AI’s rapid incorporation into present and future military campaigns. This is actively being encouraged by the Trump administration’s decision to give AI companies free rein to develop their products with minimal regulation and oversight. This is an existential train wreck waiting to happen, and it came into striking focus in the monthslong dispute between AI company Anthropic and the Pentagon. Although the Pentagon was already using the Claude platform, Secretary of War Pete Hegseth was unhappy with the company’s refusal to let it be used to remove human decision-making from military operations and to support accelerated mass surveillance of US citizens.
Anthropic’s move was that rarity in Big Tech circles: a strong and principled ethical stand against an administration that doesn’t seem to know what that is. Happy warrior Hegseth then branded the company a “supply chain risk,” effectively banning further use by the Pentagon and damaging the company’s viability in the non-defense marketplace as well. Ever the opportunist, OpenAI CEO Sam Altman then jumped in to offer his AI platform to do what Anthropic wouldn’t. The matter is now in the courts.
Handing AI the “Nuclear Football”
Using AI to create what are called autonomous systems represents a quantum leap in the rapidly advancing business of modern weaponry. Paradoxically, weapons technology is being simultaneously downsized through the use of drones and smaller, more sophisticated high-tech devices (such as mine sniffers) and upsized through the use of the AI systems designed to manage and control them.
This raises the very troubling picture of wars being conducted without much human oversight. It’s probably one reason even high-profile AI influencers and Big Tech CEOs have admitted (sometimes a little too casually) that the technology could destroy humanity given the right set of circumstances. While autonomous systems can apply to stand-alone weapons such as killer robots, the most worrying concern relates to the Pentagon’s desire to build and deploy command-and-control systems that remove military officers from the split-second decisions that need to be made in warfare. And yes, that includes nuclear weapons.
If AI is truly as superintelligent (and sentient) as its Big Tech proponents…
