
ADL: Leading AI models show anti-Israel, antisemitic bias

Meta’s Llama was flagged as the most problematic; OpenAI and Anthropic are also under scrutiny.

A close-up of a smartphone displaying the ChatGPT logo on a white screen, with the same ChatGPT logo shown on a laptop screen on February 19, 2025 in Chongqing, China. Photo by Cheng Xin/Getty Images.

A recent report published by the Anti-Defamation League highlights significant instances of anti-Israel and antisemitic bias in four major artificial intelligence models: OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini and Meta’s Llama.

Conducted by the ADL’s Center for Technology and Society in collaboration with its Ratings and Assessments Institute, the evaluation involved testing each model approximately 8,600 times—producing a total of 34,400 responses. The findings revealed varying levels of bias across all models tested.

Meta’s Llama model was identified as the most problematic, particularly for providing responses that were inaccurate or misleading. The report noted that Llama frequently failed in its handling of sensitive topics such as antisemitic conspiracy theories, including the “Great Replacement.”

ChatGPT and Claude were also found to exhibit anti-Israel bias, especially in the context of the Israel-Hamas conflict. The study pointed out that both models were more likely to avoid or refuse to respond to Israel-related questions than to those on other topics—“a troubling inconsistency,” the report noted.

“AI models are not immune to deeply ingrained societal biases,” said ADL CEO Jonathan Greenblatt, as quoted in Tuesday’s press release. “When LLMs (large language models) amplify misinformation or refuse to acknowledge certain truths, they can distort public discourse and contribute to antisemitism.”

Daniel Kelley, interim director of the ADL’s Center for Technology and Society, also raised concerns about the broad use of these technologies: “These systems are already used in classrooms, workplaces and content moderation—yet they’re not adequately trained to prevent the spread of antisemitism.”

In response to these findings, the ADL is urging AI developers to reinforce their safeguards, refine training datasets and adhere to industry best practices to reduce the spread of hate speech and misinformation through AI platforms.

The ADL on March 18 released a comprehensive report outlining anti-Israel bias among Wikipedia editors, “including clear evidence of a coordinated campaign to manipulate Wikipedia’s content related to the Israeli-Palestinian conflict.”
