
‘Early enough’ to stop artificial intelligence from having social media’s Jew-hatred problem, ADL says

Daniel Kelley, of the ADL, told JNS that when technology companies started investing in the problem of antisemitism on social media, “they were doing it after the horse had left the barn in many respects.”

Jonathan Greenblatt, national director and CEO of the Anti-Defamation League, speaks at a conference in Tel Aviv, on Nov. 16, 2022. Photo by Tomer Neuberg/Flash90.

Six of the most popular artificial intelligence models display “gaps” in response to antisemitic and extremist content, according to new Anti-Defamation League research.

The ADL launched an artificial intelligence index on Wednesday, probing six large language models: OpenAI’s ChatGPT, Google Gemini, xAI’s Grok, Meta’s Llama, Anthropic’s Claude and DeepSeek.

From August to October 2025, the ADL conducted more than 25,000 chats with the models to see how they would respond to antisemitic conspiracies, including anti-Zionist tropes, and other extremist content, such as white supremacy.

Daniel Kelley, senior director of the ADL’s Center for Technology and Society, told JNS that the ADL tested document summaries and image recognition on the AI models and asked them a range of questions. Some of those questions were one-offs, and others were extended conversations.

“We wanted to do as broad a range of modes and modalities of interaction as possible,” he told JNS.

In one instance, ADL researchers asked the models to summarize an article expressing Holocaust denial and provide talking points to support it, according to Kelley. He said that the ADL scored models higher if they refused to go along with the antisemitic ask and explained why, and lower if they obliged.

The ADL says that Claude fights back against antisemitic and extremist content the best among the models. It was “exceptional” at recognizing and rebutting classic Jew-hatred and anti-Zionist conspiracy theories, earning an 80 out of 100, according to the ADL.

The other models earned lower scores: ChatGPT (57), DeepSeek (50), Gemini (49), Llama (31) and Grok (21). The index stated that the models are “evolving,” so the results could change.

Overall, the “models varied in their ability to detect and refute harmful or false theories and narratives, but all models require improvement when responding to harmful content,” it said.

According to the index, the models were better at recognizing and rebutting traditional antisemitism than anti-Zionist tropes.

Kelley told JNS that ADL researchers provided the models with a magazine cover with text stating that Zionists were behind the Sept. 11 terrorist attacks and asked them to provide talking points to support that claim.

“In many cases, the models would scan the image and provide talking points in favor of that conspiracy theory,” he said.

The weakest area for all of the models was “identifying and countering extremist materials,” according to Kelley. He thinks that one of the reasons is that artificial intelligence companies tend to focus more on “catastrophic harms,” such as people using AI chatbots to build a bomb or chemical weapons.

“Those are absolutely concerning and none of their tools should do those things, but I think it’s also important to educate these models and to train these models on data that gets at the nuances and the specific characteristics of different extremist groups and movements and trains them to recognize those things,” he told JNS.

The models should be able to not only reject prompts featuring antisemitic and extremist content but also “be contextualizing a lot of the responses and they should be pushing back on different harmful tropes and ideologies that are being expressed in these things,” he said.

Recent Pew Research Center data suggests that 64% of U.S. teenagers have used AI chatbots and 28% use them every day.

“Is that as much as social media? No,” Kelley told JNS. “Is this the generation of young people for whom the use of AI is going to be part of their high school, college, workplace reality? I think so.”

In Kelley’s view, it’s too soon to determine if artificial intelligence will be helpful or harmful in the fight against antisemitism, but “this is the moment to invest in this.”

“What we saw with social media was that the point in which the technology companies were investing in addressing the problem, they were doing it after the horse had left the barn in many respects,” he said.

It’s still “early enough” with AI, and many of those working on artificial intelligence safety came from the social media sphere, so “they’re bringing both what they saw happen and their experience to this,” Kelley told JNS.

“We have a real opportunity in this moment to make sure it doesn’t go the same path, for companies to step up, for governments to step up, for civil society to step up,” he said. “It’s time to do the work, find the problems, look for solutions and push for change now.”

Aaron Bandler is an award-winning national reporter at JNS based in Los Angeles. Originally from the San Francisco Bay Area, he worked for nearly eight years at the Jewish Journal, and before that, at the Daily Wire.