Pentagon Document: U.S. Wants to “Suppress Dissenting Arguments” Using AI Propaganda
The United States hopes to use machine learning to create and distribute propaganda overseas in a bid to “influence foreign target audiences” and “suppress dissenting arguments,” according to a U.S. Special Operations Command document reviewed by The Intercept.
The document, a sort of special operations wishlist of near-future military technology, reveals new details about a broad variety of capabilities that SOCOM hopes to purchase within the next five to seven years, including state-of-the-art cameras, sensors, directed energy weapons, and other gadgets to help operators find and kill their quarry. Among the tech it wants to procure is machine-learning software that can be used for information warfare.
To bolster its “Advanced Technology Augmentations to Military Information Support Operations” — also known as MISO — SOCOM is looking for a contractor that can “Provide a capability leveraging agentic AI or multi-LLM agent systems with specialized roles to increase the scale of influence operations.”
So-called “agentic” systems use machine-learning models that purportedly operate with minimal human instruction or oversight. These systems can be used in conjunction with large language models, or LLMs, like ChatGPT, which generate text based on user prompts. While much of the marketing hype around agentic systems and LLMs centers on their potential to handle mundane tasks like online shopping and booking tickets, SOCOM believes the techniques could be well suited to running an autonomous propaganda outfit.
“The information environment moves too fast for military remembers [sic] to adequately engage and influence an audience on the internet,” the document notes. “Having a program built to support our objectives can enable us to control narratives and influence audiences in real time.”
Laws and Pentagon policy generally prohibit military propaganda campaigns from targeting U.S. audiences, but the porous nature of the internet makes that prohibition difficult to enforce.
In a statement, SOCOM spokesperson Dan Lessard acknowledged that SOCOM is pursuing “cutting-edge, AI-enabled capabilities.”
“All AI-enabled capabilities are developed and employed under the Department of Defense’s Responsible AI framework, which ensures accountability and transparency by requiring human oversight and decision-making,” he told The Intercept. “USSOCOM’s internet-based MISO efforts are aligned with U.S. law and policy. These operations do not target the American public and are designed to support national security objectives in the face of increasingly complex global challenges.”
Tools like OpenAI’s ChatGPT or Google’s Gemini have surged in popularity despite their propensity for factual errors and other erratic outputs. But their ability to instantly churn out text on virtually any subject, written in virtually any tone — from casual trolling to pseudo-academic — could mark a major leap forward for internet propagandists. These tools give users the potential to fine-tune messaging for any number of audiences without the time or cost of human labor.
Whether AI-generated propaganda works remains an open question, but the practice has already been amply documented in the wild. In May 2024, OpenAI issued a report revealing efforts by Iranian, Chinese, and Russian actors to use the company’s tools in covert influence campaigns, though it found none had been particularly successful. In comments before the 2023 Senate AI Insight Forum, Jessica Brandt of the Brookings Institution warned that “LLMs could increase the personalization, and therefore the persuasiveness, of information campaigns.” In an online ecosystem filled with AI information warfare campaigns, “skepticism about the existence of objective truth is likely to increase,” she cautioned. A 2024 study published in the academic journal PNAS Nexus found that “language models can generate text that is nearly as persuasive for US audiences as content we sourced from real-world foreign covert propaganda campaigns.”
