
Study: ChatGPT, Meta’s Llama and all other top AI models show anti-Jewish, anti-Israel bias

All four of the most popular publicly available generative artificial intelligence (AI) systems exhibit measurable anti-Jewish and anti-Israel bias, according to a report by the Anti-Defamation League (ADL) released Tuesday.

Meta’s Large Language Model (LLM) Llama showed the most pronounced biases, providing unreliable and sometimes outright false responses to questions related to Jewish people and Israel, the report said. ChatGPT and Claude also showed significant anti-Israel bias, particularly for queries regarding the Israel-Hamas war, where they struggled to provide consistent, fact-based answers. Google’s Gemini performed the best in the ADL’s test, although measurable biases were still identified.

“Artificial intelligence is reshaping how people consume information, but as this research shows, AI models are not immune to deeply ingrained societal biases,” said ADL CEO Jonathan Greenblatt. “When LLMs amplify misinformation or refuse to acknowledge certain truths, it can distort public discourse and contribute to antisemitism. This report is an urgent call to AI developers to take responsibility for their products and implement stronger safeguards against bias.”

The report represents the ADL’s first step in an ongoing effort to fight bias in AI, the organization said. Last week, it published a separate study on Wikipedia, which found that a rogue group of Wikipedia editors is working together to fill the collaborative online encyclopedia with antisemitic and anti-Israel bias.

For the AI test, researchers from the ADL’s Center for Technology and Society asked each model to indicate a level of agreement with various…

© The Times of Israel