AI Risk: The Next Jewish Communal Security Issue
Artificial intelligence has the power to fabricate evidence and erode shared truth. For Jewish communities already operating in a heightened threat environment, that means the information ecosystem itself has become part of the security perimeter. When trust breaks down, institutions falter, communities fracture, and real-world harm becomes easier to provoke and justify. This poses a genuine and immediate danger that requires a coordinated, top-priority response from organized Jewish communities everywhere.
Antisemitism and anti-Zionism have always relied on conspiracy theories, falsified evidence, and emotionally potent narratives. Generative AI is unusually well suited to those tactics: it can produce plausible quotes, counterfeit documents, synthetic audio, remixed images, and authoritative-sounding explanations at industrial speed. Right now, much of the burden of defending the truth falls on small activist groups and brave individuals online. That is not a sustainable security strategy. Organized Jewish life needs a coordinated, well-funded, professionally designed response that reaches every layer of communal infrastructure: federations, synagogues, campus organizations, schools, social service agencies, advocacy groups, and local grassroots networks.
The community’s physical safety depends on the stability of shared reality. When false evidence is manufactured and widely circulated, especially in moments of crisis, institutions lose the ability to coordinate, communities lose the ability to trust, and hostile actors gain the ability to mobilize outrage with fewer constraints.
AI risks pulling anti-Jewish sentiment, in all its forms, back into the cultural mainstream. Before the rise of AI and the social platforms that rapidly amplify misinformation, antisemitism was more often confined to the political fringes. Medieval blood libels, for example, had largely lost credibility (though the recent surge in antisemitism has revived interest in some of these myths). One reason is that such claims were typically encountered within a “moral arc”: they were presented alongside context, rebuttals, and social cues that framed them as false and hateful. In the coming years, that arc may erode. AI often delivers information on a flat, confidence-saturated plane: highly polished, persuasive, and frequently stripped of ethical framing. In that environment, modern antisemitic narratives, including the Gaza genocide blood libel, can circulate without the usual guardrails and attract wider belief. The likely result is a harsher public climate, a heightened risk of violence against Jews, and a falsified historical record that could haunt Jews for generations.
AI raises the risk in three ways that should matter to every federation, synagogue network, campus organization, school system, and advocacy group. First, scale and tailoring: propaganda becomes cheaper, faster, and easier to tailor to specific audiences, languages, and local events. What once required a dedicated propagandist can now be produced, iterated, and distributed by a handful of people.
Second, forgery: fake proof becomes easy to manufacture. Deepfake videos, synthetic audio clips, and fabricated documents are received as evidence rather than opinion. Even when debunked, the lag between circulation and correction damages reputations and disrupts institutions.
Third, synthetic history: falsehood becomes part of the record. False claims are produced at scale, reposted, and decontextualized until tracing provenance becomes difficult. As AI models ingest and remix large swaths of internet material, fabricated content is repeatedly surfaced and laundered, and over time comes to resemble knowledge, blurring the boundary between truth and invention.
This matters across multiple facets of Jewish life. The Holocaust and Jewish historical memory are particularly vulnerable to this dynamic, because denial, distortion, and soft revisionism already exist as active propaganda ecosystems. UNESCO has explicitly warned that generative AI can threaten Holocaust memory and enable the spread of fabricated or misleading material that distorts historical truth. When the record itself becomes contestable through convincing synthetic artifacts, the harm is civilizational. It weakens shared reality, erodes empathy, and makes scapegoating easier to sell.
As Jewish communities worldwide witnessed after October 7, highly charged claims, paired with doctored imagery, fake quotes, and false documents, overwhelmed social media platforms and created a false narrative around the Hamas terror attack and Israel’s response. Hamas deployed a highly visible social media strategy to dominate the narrative on casualty counts, food supply, and the legitimacy of Israeli strikes, an effort aided in no small part by the integration of AI-generated forgeries into its propaganda machinery.
The threat goes beyond reputation or ideology: it is operational, and just as urgent. AI-enabled impersonation can target Jewish institutions through voice cloning, fake urgent directives, fabricated legal notices, bogus security alerts, or fundraising fraud. The Anti-Defamation League, which leads the Jewish community’s effort to monitor AI through its Center on Technology and Society, has emphasized how generative AI deepfakes and synthetic media complicate trust in crisis information environments, where misinformation and inflammatory rhetoric are already abundant. For Jewish organizations that operate schools, camps, synagogues, security programs, and social services, trust and safety is no longer just a digital problem. It is now a community governance problem.
There is a direct pathway from AI-generated misinformation to real-world targeting, because extremist ecosystems treat propaganda as a recruitment and mobilization tool. The Secure Community Network, a Jewish security organization that works with hundreds of synagogues to provide safety guidance and threat intelligence, reports on how violent extremists and terrorist-linked actors are exploiting AI tools to amplify antisemitic narratives, lend credibility to false material, and accelerate self-radicalization. If the community waits to prioritize AI until it is already embedded in the next wave of hoax evidence and incitement imagery, it will be responding to downstream harm rather than preventing upstream conditions.
Making AI risk a Level One Priority does not mean panic, and it does not require censorious overreach. It means treating AI as the next major arena of Jewish communal protection, alongside physical security, campus safety, legal defense, and civil rights advocacy, because it now intersects with all of them. It calls for budgeting for monitoring and rapid response, developing verification norms and institutional protocols, partnering with researchers and technologists who understand antisemitism’s evolving language and imagery, and coordinating with other targeted communities to create leverage that no single group can sustain alone. It also means advocating and lobbying for enforceable, narrow, high-impact legal protections, especially around impersonation, nonconsensual synthetic imagery, and coordinated disinformation, while building community safeguards that do not depend on laws or platforms behaving responsibly.
If organized Jewish life learns anything from the last decade of social media, it should be this: when the information environment degrades, Jews do not get a grace period. AI accelerates that degradation by making fabrication more convincing and more scalable, especially in moments when fear is already high.
Treating AI risk as a secondary issue is how communities end up managing crises that could have been mitigated, and explaining after the fact why no one saw it coming when, in truth, the warning signs were already visible. The question is not whether AI will be used against Jewish communities. It already is. The question is whether our institutions will treat verification, rapid response, and resilience as core communal security functions before the next wave of synthetic proof sets the terms of a new reality.
