AI Has an Antisemitism Problem – That It’s Learning From Us
Photo illustration: The Grok X AI app displayed on a mobile phone. (Jonathan Raa | NurPhoto via Getty Images)
In July, Grok, the artificial intelligence chatbot from Elon Musk's xAI, dubbed itself “MechaHitler,” capping off an entire day spent spewing antisemitic rhetoric. Its deeply offensive responses ranged from accusations that Jewish people were “pushing anti-white hate” to praise of Hitler and even what appeared to be an endorsement of the Holocaust.
Following this incident, Elon Musk stated that steps were being taken to improve the chatbot. In October, Musk announced the imminent release of “Grokipedia,” a supposed competitor to Wikipedia – based on the Grok AI – that will be smart enough to “remove falsehoods, correct half-truths, and add crucial and missing context” to its encyclopedia entries. But who is determining what these falsehoods and half-truths are? The same program that praised Hitler just a few months earlier?
Unfortunately, Grok isn’t the first chatbot to spread antisemitism, nor will it be the last. AI has an antisemitism problem – that it’s learning from us.
Over the last decade, the internet has become fertile ground for violent extremism and antisemitism. Platforms such as TikTok, X, Facebook and 4chan have been linked to the propagation of ancient antisemitic tropes, dangerous conspiracy theories and incitements to violence against the Jewish community.
To make matters worse, instances of online antisemitism have skyrocketed in recent years, particularly since the Hamas attack on Israel on Oct. 7, 2023. In the immediate wake of the tragedy, the Anti-Defamation League reported a 360% surge in online antisemitic incidents. Almost two years later, roughly two-thirds of American Jews report encountering antisemitism online or on social media, according to a recent report by the American Jewish Committee. Unfortunately, this hate is not confined to rhetoric. The Pittsburgh Tree of Life synagogue shooting in 2018 and the murder of two young Israeli Embassy staffers this past May are just two high-profile cases of lives lost to antisemitism.
In short, antisemitism in online media is not a new – or fading – issue; however, it takes on new importance when it shapes AI systems. Fundamentally, AI learns by processing vast amounts of data, recognizing patterns and identifying trends in order to mimic human thought. Giving an AI program access to the internet supplies it with an almost limitless store of learning material, including research, news articles, surveys and more. Yet it also gives the program access to social media and discussion forums, where personal biases and opinions often supersede facts.
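To make that concrete, here is a toy sketch in Python – illustrative only, with invented data and a deliberately neutral placeholder for the targeted group – of the pattern-matching these systems rely on. A model that merely counts which words appear together in its training text absorbs whatever associations that text contains, including hateful ones; nobody has to program the bias in.

    from collections import Counter
    from itertools import combinations

    # Toy corpus standing in for scraped web text. "targetgroup" is a
    # neutral placeholder; the middle line mimics the kind of
    # conspiratorial forum post a web crawler can sweep up. (Invented data.)
    corpus = [
        "targetgroup community hosts charity drive",
        "targetgroup conspiracy controls banks",
        "local charity drive raises funds",
    ]

    # Count how often pairs of words co-occur in a sentence -- a crude
    # stand-in for the statistical associations language models learn.
    pair_counts = Counter()
    for sentence in corpus:
        words = sorted(set(sentence.split()))
        for pair in combinations(words, 2):
            pair_counts[pair] += 1

    # The "model" now links the group to the conspiracy trope simply
    # because one post paired them in the data it was fed.
    print(pair_counts[("conspiracy", "targetgroup")])  # prints 1

Real language models are vastly more sophisticated, but the underlying dynamic is the same: the statistics of the training data become the assumptions of the system.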
And that learned bias is already surfacing. A study released by StopAntisemitism in August found that AI models such as ChatGPT, Claude, Grok and Perplexity display “concerning behavior that demonstrates the need to create stronger safeguards in those systems to fight potential antisemitic behavior and tropes.”
As AI becomes more ubiquitous, accountability and continued technical improvement are crucial if tech companies are genuinely committed to scrubbing the years of accumulated antisemitic content from the sources their AI programs draw on. But that is just the start. The federal government must also play a role.
Earlier this year, the Trump administration unveiled its AI Action Plan, part of which seeks to purge AI systems of what the administration considers “ideological bias and engineered social agendas.” The action plan provides a solid foundation, but there are other steps policymakers can take immediately. The administration and all federal, state and local officials should embrace their roles as consumers – not just regulators – and use only AI that has been developed without stereotypes or biases, just as the executive action directs. This is all the more essential as federal workers increasingly adopt AI tools. Federal and state governments can lead in the development of responsible AI without infringing on anyone’s speech and without resorting to overregulation.
Simultaneously, technology companies must react quickly and efficiently to antisemitic material generated by AI programs, removing discriminatory output as soon as it is flagged. Not only must these companies respond in the immediate term, but they must also build fundamental safeguards into their AI processes, including human oversight, improved ethical standards and training. They are the ones creating and introducing this new technology; the responsibility for doing so safely rests with them. These measures, however, will be ineffective unless companies adopt appropriate standards on antisemitism, such as recognizing the International Holocaust Remembrance Alliance’s definition of antisemitism and training their AI systems accordingly.
Once an AI program has been trained on a body of unreliable and biased material, its core assumptions are set. But safeguards can act as a filter, helping to prevent an AI program from expressing some of the biased conclusions it might otherwise reach.
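As a rough illustration of what such a filter could look like – a minimal sketch with an invented blocklist and function names, not any company's actual moderation pipeline – consider a gate that screens a model's draft reply before anything reaches the user. Real systems rely on trained classifiers and human review rather than a keyword list, but the placement is the point: the bias may persist inside the model, while the safeguard decides what leaves it.

    # Minimal sketch of a post-generation safeguard. The phrases below are
    # illustrative assumptions, not a production blocklist; real deployments
    # layer trained classifiers and human review on top of anything like this.
    BLOCKED_TROPES = [
        "controls the banks",
        "controls the media",
        "pushing anti-white hate",
    ]

    def passes_safeguard(draft_reply: str) -> bool:
        """Return False if the draft contains a known trope and must be withheld."""
        text = draft_reply.lower()
        return not any(trope in text for trope in BLOCKED_TROPES)

    def respond(draft_reply: str) -> str:
        # The filter sits between the model and the user: the model's biased
        # "core assumptions" may remain, but this output never ships.
        if passes_safeguard(draft_reply):
            return draft_reply
        return "This response was withheld because it violated content standards."

    print(respond("A shadowy group controls the media."))  # withheld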
Technology companies thrive on innovation and progress, anticipating issues – and often solving them – before they even happen. Why won’t they do the same for antisemitism?
Kenneth L. Marcus is the chairman and CEO of The Louis D. Brandeis Center for Human Rights Under Law and the former Assistant Secretary for Civil Rights at the United States Department of Education under two administrations.
