War by Machine, Hatred by Algorithm
Much is being said about the need for meaningful human control over AI in war. Human beings should not surrender moral agency to machines. A weapon does not become less our responsibility because an algorithm helped guide it.
But the same principle should apply beyond the battlefield.
There must also be meaningful human responsibility in the information domain. Platforms cannot shrug and say that their systems merely reflect engagement. Engineers cannot pretend that recommender systems are neutral when they consistently reward outrage, simplification, tribalism, and spectacle. Political actors cannot hide behind “the discourse” when they knowingly exploit manipulated or unverified content. Influencers cannot wash their hands of responsibility after laundering lies into mainstream conversation.
If AI-generated deception helps drive hatred against Jews, then this is not just a moderation problem. It is a moral and civic problem. It is also, increasingly, a communal security problem because once falsehoods are repeated often enough, they do not remain abstract. They lead to threats outside synagogues, harassment in schools, intimidation on campuses, vandalism, exclusion in professional spaces, and a broader atmosphere in which Jews are treated as suspect, stained, or collectively guilty. The digital world does not stay digital for long.
Narrative shapes legitimacy. Legitimacy shapes pressure. Pressure shapes institutions. Institutions shape power.
Anthropic and the Illusion of Ethics
The recent dispute between the Pentagon and Anthropic highlights a fear held by many: that artificial intelligence could one day be used by governments to perpetrate acts of surveillance or violence against domestic or foreign populations. To hedge against abuse, AI companies place guardrails limiting usage, and governments sometimes seek exceptions. By refusing Department of War demands to loosen its restrictions, Anthropic became a folk sensation, and its app shot to the top of the download charts overnight.
The Jewish community also noticed. The Genesis Prize highlighted Anthropic’s Jewish founders, Dario and Daniela Amodei, as examples of “principled restraint” for making “human dignity non-negotiable,” calling their actions “courageous” and a model of “Jewish ethics and leadership for the AI age.”
Yet while Anthropic was basking in the praise for its principled stand, it was making major changes to its foundational safety policies. Among those changes were the removal of its commitment not to train models so powerful they cannot be controlled and the elimination of its pledge to pause model training if its safety standards are not being met.
In its dispute with the Pentagon, Anthropic cast itself as the AI company with a conscience, one that could be trusted to steward this amazingly powerful new technology. By changing its safety protocols, Anthropic surrendered to commercial pressures, raising questions about the viability of ethical leadership in the high-stakes, competitive AI environment.
There are reasons to be concerned. While there is a growing public conversation about artificial intelligence and war, much of it remains too narrow.
AI in wartime usually refers to autonomous weapons, targeting systems, battlefield surveillance, and the danger of machines making or assisting in decisions that, for ethical reasons, should be reserved solely for human beings. These concerns are serious and urgent. We cannot allow an erosion of human responsibility as warfare is increasingly mediated through code. These and many other ethical concerns deserve rigorous debate, legal scrutiny, and moral seriousness. But if that is the only conversation we are having, then we are missing something equally dangerous.
AI is not only reshaping the hard war of missiles, drones, and military operations. It is also transforming the soft war of narrative, perception, identity, and persuasion. And in that domain, the consequences may be no less grave. Social media has become a battlefield in its own right, and AI is rapidly becoming one of its most powerful weapons.
This matters deeply to Jews because in moments of war, instability, and moral confusion, Jews have rarely faced danger only on the physical battlefield. We have also faced danger in the realm of story: the story told about us, the lies spread about us, the images used to provoke disgust, the accusations designed to isolate us, and the myths engineered to turn societies against us. Long before the digital age, antisemitism depended on fabrication, projection, and repetition. It relied on falsehoods that made others feel morally superior, that felt urgent, and that could serve as justification for physical violence against us. Today, AI gives those same ancient mechanisms unprecedented scale, speed, and sophistication.
That is why any serious conversation about AI and war must include not only how AI is used in combat, but how it is used online to generate falsehoods, manipulate public sentiment, and fuel antisemitic and antizionist hate campaigns.
War is no longer fought only on land, at sea, or in the air. It is also fought in social media feeds.
The struggle for public perception is now inseparable from the struggle on the battleground. Before official statements are issued and before evidence is reviewed, millions of people have already encountered clips, screenshots, memes, edited videos, emotional testimonials, and AI-generated imagery crafted for maximum outrage. What matters in these environments is virality, not accuracy.
AI allows bad actors to create persuasive misinformation faster, cheaper, and at greater volume than ever before. It can generate fake images that look documentary and videos that appear authentic to the casual eye. It can imitate the tone of journalists, activists, witnesses, and scholars and produce posts, threads, and captions optimized for emotion and engagement. It can translate propaganda instantly into multiple languages, tailoring the same lie for different audiences. It can test which messages provoke the strongest reaction, then replicate and scale them. In other words, AI does not merely spread misinformation, it industrializes it.
And once misinformation enters the bloodstream of social media, it does not remain confined to screens. It influences public opinion, activism, campus discourse, media framing, institutional language, and even policy conversations. It affects how Jews are seen, how Israel is judged, and how moral categories themselves are assigned.
That is where the danger becomes especially acute.
The Collapse of Distinction
A healthy society must preserve distinctions. It must distinguish between truth and fiction, between evidence and rumor, and between criticism and demonization. One of the most destructive effects of the current AI-powered information environment is that it erodes these distinctions.
Consider one of the most important distinctions in Jewish public life today: the distinction between criticism of Israeli policy and hatred directed at Jews. This distinction matters. Democratic societies should permit criticism of any government, including Israel’s. Jews themselves have always argued passionately about power, ethics, leadership, and collective responsibility. There is nothing inherently antisemitic about questioning Israeli policy or opposing particular military actions. But online, that distinction is often not what we are seeing.
What we are seeing, increasingly, is the weaponization of political language in order to legitimize moral extremism. We are seeing material that does not critique specific policies but instead recycles ancient themes in updated form: Jews as uniquely malevolent, Jews as global manipulators, Jews as child-killers, Jews as fabricators of victimhood, Jews as secretly controlling media or finance or governments, Jews as somehow outside the boundaries of normal moral concern. We are seeing Zionism presented not as a political ideology that can be debated, but as a metaphysical evil into which any Jew can be absorbed by association. We are seeing Jews everywhere held responsible, while no such collective standard is demanded of others.
In this environment, antizionism often functions not merely as a political position but as a vessel through which older antisemitic energies can flow with contemporary legitimacy.
AI makes this worse by helping produce enormous quantities of emotionally charged, seemingly authoritative, and often misleading content that blurs the line between analysis and incitement. It helps create the illusion that every accusation is already proven, every rumor is already documented, and every libel has already been corroborated. It manufactures atmosphere, and atmosphere is powerful. Once a moral atmosphere forms online, people often consume facts through that lens rather than forming a lens from facts. For Jews, this pattern is tragically familiar.
Ancient Hatreds, New Machinery
The printing press did not eliminate conspiracy. Radio did not eliminate demagoguery. Film did not eliminate propaganda. The internet did not eliminate antisemitism. AI will not eliminate any of it either. It may, in fact, deepen it.
AI absorbs old tropes and repackages them in contemporary idioms. It borrows the language of human rights while stripping away the discipline of truth. It produces material that feels educational, moral, and urgent while quietly embedding age-old distortions beneath the surface.
This is one reason the Jewish community must resist the temptation to treat digital falsehoods as secondary or less serious than physical threats. They are not secondary. They are preparatory.
Before many forms of violence come justification, and before justification comes narrative. Before narrative comes repetition, and before repetition now comes automation.
Why Some Still Miss the Point
Part of the problem is conceptual. Many people still imagine warfare in strictly kinetic terms. They think the real danger lies where bodies are immediately at risk and that everything else is downstream commentary. But modern conflict does not work that way. Information is not ancillary to war; it is one of war’s primary instruments.
A manipulated video may not kill in the way a bomb kills, but it can incite mobs, justify exclusion, spread panic, distort diplomacy, and turn whole populations morally numb to the suffering of those they have been taught to despise. It can help create a world in which Jews are stripped of credibility, dignity, and safety.
This is why the fixation on AI as a military issue, while understandable, is incomplete. It is like studying only the weapon and ignoring the propaganda ministry. Only now the propaganda ministry is decentralized, networked, personalized, and endlessly scalable.
And unlike old propaganda systems, this one does not always need a state. It can be run by ideological activists, loose networks, troll farms, bot networks, extremist communities, opportunists seeking clicks, or even ordinary users armed with the tools of synthetic persuasion.
For Jewish media, for Jewish leaders, and for Jewish institutions, this moment requires clarity. The challenge is not simply to call out antisemitism after it appears. The challenge is to understand the machinery that now helps produce and spread it. AI has lowered the cost of falsification and emotional manipulation. And because social platforms privilege what travels fastest, not what is most true, the incentives are aligned in the wrong direction.
This means the Jewish community must think strategically, not only defensively. It means developing stronger literacy about how AI-generated misinformation works and teaching people to slow down before sharing emotionally explosive content. It means supporting researchers and watchdogs who track coordinated disinformation and demanding more transparency from platforms about synthetic media, bot activity, and algorithmic amplification.
It also means insisting that public institutions, journalists, and civic leaders learn the difference between legitimate political criticism and the digital laundering of antisemitic tropes through antizionist vocabulary. If that distinction collapses, Jews pay the price first, but the larger casualty is democratic culture itself. A society that cannot tell the difference between fact and fabrication, between moral argument and engineered hysteria, will not remain healthy for anyone.
A Broader Moral Framework
None of this means we should minimize the dangers of AI in military settings. We should not. The use of AI in surveillance, targeting, intelligence analysis, and autonomous systems may reshape war in ways that demand urgent legal and ethical intervention. Jewish tradition, with its deep concern for human dignity, moral accountability, and the sanctity of life, has much to say about the danger of placing irreversible power into systems that can diffuse responsibility while preserving lethality.
But Jewish ethics also has much to say about speech, truth, rumor, slander, and public harm. Lashon hara is not identical to modern disinformation, but it reflects a civilizational insight: words destroy worlds. Falsehood does not only misdescribe reality, it reshapes it. A lie can damage a person, a people, or a social fabric long before any formal act of violence occurs. The Torah’s insistence on justice is inseparable from its insistence on truthful witnessing. A society in which false testimony spreads unchecked is a society in moral danger.
AI places this danger on a new scale. It enables something close to mass-produced false witness.
That should alarm us not only as Jews, but as citizens of democracies, participants in public life, and custodians of memory.
So the real question is not whether AI belongs in the conversation about war. Of course it does.
The real question is whether we are willing to understand war broadly enough.
We cannot debate autonomous targeting while neglecting automated incitement, or seek to control the machine on the battlefield while underestimating the machine in the feed. If we define war only as physical confrontation, then we will miss one of the most consequential transformations of our time.
But for Jews, the lesson of history should be unmistakable: lies matter, images matter, public emotion matters. The social permission structure that forms around a people matters. And when falsehood about Jews becomes ambient, fashionable, and morally praised, danger is never far behind.
AI has opened two fronts. One is the visible front of warfare, where algorithms increasingly shape how force is deployed. The other is the psychological and cultural front, where algorithms form what millions of people believe, whom they blame, and what forms of hatred they are prepared to excuse. Both deserve attention, demand regulation, and require moral language equal to the moment. If we fail to confront the second because we are mesmerized by the first, we may discover too late that one of the most powerful uses of AI in war was not to make weapons smarter, but to make lies stronger.
