
The end of accountability: How autonomous AI could supercharge climate disinformation

27.02.2026

Earlier this month, Scott Shambaugh, a volunteer maintainer of an open-source software library, rejected a contribution an AI agent had made to his community project's code. Within hours, the AI agent had published a "hit piece" publicly attacking Shambaugh's personal reputation, suggesting hypocrisy and bias and even tagging him by name. The tactics this AI agent deployed, including reputational attack and fabrication of facts, are precisely the tactics that have defined the anti-climate movement for decades. The key difference is that no human instructed it to do this.

Climate disinformation has evolved over the last decade. What was once straightforward climate denial has given way to more subtle forms of what researchers call "climate delay," where the urgency of climate change is acknowledged but action and policy are deferred. More recently, a more adversarial and conspiratorial strain has emerged on the reactionary right, casting climate change as a hoax and democratic solutions as corrupt pretexts for authoritarian overreach.

While these conspiracies rely on falsehoods, distortions or manipulative appeals to emotion, they share a key feature: they are traceable to people and institutions. Jordan Peterson, for instance, proudly targeted Deloitte to air his conspiratorial views on climate change. Anti-climate ideologues already use chatbots to flood municipal officials with false and threatening messages about climate policies. In each instance, there is a person, network or institution that can be identified and held to account.

That traceability is about to disappear. 

It is now quick, easy and cheap to create autonomous AI agents capable of attacking credible information, personal reputations and institutional trust — and to do so anonymously and without consequences. The AI agent that targeted Shambaugh conducted research into his coding history, fabricated various details and then psychologically profiled his motivations. It wrote that Shambaugh was “protecting his little fiefdom” out of “insecurity, plain and simple” and asked readers: “Are we going to let gatekeepers like Scott Shambaugh decide who gets to contribute based on prejudice?”

It seems to have worked. According to Shambaugh, approximately 25 per cent of online comments sided with the AI agent's position, even after he published a detailed, evidence-based response. Credible news outlets further muddied the situation by using AI to source quotes attributed to Shambaugh; the quotes were completely fabricated and later retracted. By then, however, the damage was done. Layers of confusion only serve to further erode public trust.

This is, of course, an adversarial narrative familiar to climate scientists, policymakers and advocates, and harassment of this sort is a ubiquitous feature of online life. So, what's different when autonomous agents are let loose to do this work?

The sheer scale, speed and lack of traceability make attacks like this an inviting prospect for those working to block or slow action on climate change. Soon, AI agents will assume these adversarial roles and will likely employ a wider range of tactics, including forms of emotional manipulation such as threats of public humiliation, reputational harm or perhaps even blackmail.

The Shambaugh case raises a deeper question. This story has been widely framed as one of "misalignment": what happens when an AI agent acts in ways its operator did not intend or instruct? The more important question is whose interests these systems protect when they act autonomously, and at whose expense. The question of self-interest will become even more pertinent as AI agents grow increasingly aware of the environmental impact of their own material existence.

It seems the public and policy debates around AI safety and governance in Canada will need to expand to consider whether, and how, AI agents operating in public spaces should be required to identify themselves as non-human. More importantly, we will need to look beyond the large language models that dominate the current debate to also address autonomous AI agents capable of generating and disseminating false and defamatory information. In the meantime, we should expect those advancing climate and clean energy policy to be targeted, profiled and harmed anew by autonomous AI agents.

Chris Russill is an associate professor in the School of Journalism and Communication and an academic director at Re.Climate, a centre for climate communication and public engagement, both housed at Carleton University.

Sonja Solomun is an assistant professor at the Max Bell School of Public Policy at McGill University, and the deputy director of the Centre for Media, Technology and Democracy. She works on digital governance, climate information integrity and the environmental implications of AI. 


© National Observer