Since Oct. 7, 2023, antisemitism has surged across the world, hatred that often starts in obscure corners of the internet and eventually overflows into real life.
Technology has expanded the reach of those who amplify tropes, spread fake news and generate artificial content that directs hate and abuse at Jews and feeds politicized radicalization pipelines. To combat this problem, Jewish-led startups are treating the digital space as an additional security frontier, building tools that protect against algorithmic, systemic and human prejudice.
One company is CyberWell, a nonprofit that has developed a 360-degree online antisemitism compliance solution to empower social media platforms to enforce their digital policies more effectively.
“The opportunity here is to look at the whole world of trust and safety and digital policies,” said Tal-Or Cohen Montemayor, its founder and executive director. “One of the ways that you can make a scalable impact against this phenomenon is by, first of all, really understanding the rules and also making sure that those rules are enforced. Technology presents a unique opportunity.”
According to the Anti-Defamation League’s data, antisemitic incidents in the United States rose from roughly 2,000 in 2020 to 9,300 in 2024, marking a 344% increase over the past five years and an 893% increase over the past decade. The rise has real-world consequences: Innocent Jews are murdered in the streets of the United States and accosted in the cities of Europe, while antisemitism becomes systemic among influential far-left and far-right online communities.
Today, memes and coded language can bypass algorithms, making antisemitism monitoring a full-time job. CyberWell combats this by employing a full-time team of analysts and using AI to detect and flag antisemitic content that meets the IHRA’s working definition and violates the digital policies of social media platforms. “Technology is code. … The whole world of technology is an evolving space, and we are actively as a society writing the rules of what the rules of social media are, and what the rules of AI technologies are,” Cohen Montemayor said.
Beyond chatrooms and comment sections
Another company, OpenWeb, is on a mission to build “a more open, healthier web” by creating a product that filters hateful speech in online comment sections. Users may be familiar with its work: The Israeli company has helped more than 5,000 online publishers host a total of more than 150 million monthly active users. Its LLM-based moderation algorithm, Aida, uses linguistic and contextual cues to spot when a comment turns violent or targets Jews and other minorities.
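Aida itself is proprietary, but the general pattern of tiered comment moderation can be sketched. The word lists, function name and decision rules below are illustrative assumptions, not OpenWeb’s actual code: a comment is scored for two signals, violent language and references to a targeted group, and the combination determines whether it is approved, escalated to a human, or blocked.

```python
import re

# Illustrative word lists: placeholders, not a real moderation lexicon.
VIOLENT_TERMS = {"kill", "burn", "gas", "exterminate"}
TARGET_TERMS = {"jews", "zionists", "minorities"}

def moderate_comment(text: str) -> str:
    """Return 'approve', 'review' or 'block' for a comment.

    A production LLM-based system would weigh context and intent;
    this sketch just combines two keyword signals.
    """
    words = set(re.findall(r"[a-z']+", text.lower()))
    has_violence = bool(words & VIOLENT_TERMS)
    has_target = bool(words & TARGET_TERMS)
    if has_violence and has_target:
        return "block"    # violent language aimed at a group
    if has_violence or has_target:
        return "review"   # ambiguous on its own: escalate to a human
    return "approve"

print(moderate_comment("Great article, thanks!"))     # approve
print(moderate_comment("They should burn for this"))  # review
```

The middle “review” tier is the point of the design: keyword matches alone produce false positives, so ambiguous cases go to the full-time analysts rather than being silently deleted.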
But today’s battle against antisemitism goes far beyond chatrooms and comment sections; it permeates social media feeds and is fueled by generative artificial intelligence—a tool so powerful that the challenge of stemming the spread of antisemitic content can seem insurmountable.
“The rise or the use of generative AI tools for songs and videos is really exacerbating this vulnerability of coded language that’s meant to evade trust and safety mechanisms and policies,” explained Cohen Montemayor. “It is absolutely a vulnerability that antisemites and extremists are exacerbating right now in this space.”
Antisemitism is not a static set of stereotypes, but “a highly adaptive discourse that restructures itself according to the political-cultural opportunity structures of the digital age,” added Matthias J. Becker, a linguist and senior fellow at the Tel Aviv Institute specializing in pragmatics, cognitive linguistics, discourse analysis and social media studies with a focus on prejudice and hate.
“It spreads much like a viral system: mutating, exploiting vulnerabilities and thriving in environments shaped by ambiguity, anonymity, algorithmic amplification and the erosion of traditional gatekeepers,” he said.
CyberWell identified a trend in which the public uses Google’s Veo 3 video generator to create false images of Orthodox Jewish men claiming that various possessions or situations were “promised to them 3,000 years ago.” The trend reinforces stereotypes about Jewish character and appearance while delegitimizing the Jewish connection to the Land of Israel. Google’s official policy on generative AI allows the creation of fictional characters as long as its Prohibited Use Policy, which aims to prevent the generation of harmful, illegal or misleading content, is followed. Google claims this approach combats content that promotes hate speech, violence, harassment or sexually explicit themes, but the policy contains no specific clauses that would allow the banning of stereotypical depictions.
“Technology is actually a new front when it comes to fighting antisemitism,” said Chen Shmilo, CEO of 8200 Alumni Association and a council member of the Voice of the People. “People talk about social media, but we have to understand that the same way we need to fight it on the legal aspect, diplomacy, advocacy and education, technology is also a big part of the story. It doesn’t necessarily create antisemitism, but it is the amplifier of the antisemitic voice,” he added.
Voice of the People is a global initiative led by Israeli President Isaac Herzog that unites Jewish leaders to identify and address long-term challenges faced by the Jewish people. As part of the Counter-Antisemitism group, Shmilo helps develop a response to rising Jew-hatred through innovation and cross-sector collaboration.
“We are in a battle of truth versus lies,” he said. “So many things are not clear-cut …, but in many cases, it’s either [that] something happened or didn’t happen. And then evil forces use this fight: Now they have additional technological tools to execute their plan. It’s truth versus lies, but also acknowledging that we need to keep freedom of speech, [which] in many cases, also allows you to lie.”
Free speech vs. weaponized tech
One of the chief concerns about tracking antisemitism online is that critics are quick to cite free-speech protections. While private social media platforms in the United States are not bound by constitutional speech protections, many are careful to walk a line between free speech and paid, algorithmically boosted speech that may cause harm.
For 25 years, HonestReporting has fought to ensure “truth, integrity and fairness” in media. The NGO uses AI systems to identify biases and correct prejudices against Israel found in traditional media and their social media channels.
“Freedom of speech is not freedom from the consequences of speech,” said HonestReporting Global CEO Jacki Alexander, a former regional director of operations for AIPAC. “There are also important guardrails placed around freedom of speech: You can’t yell fire in a crowded theater [if the intention is to cause a panic], and your right to swing your fist ends where my nose begins. Historically, that has meant that imminent threats are not protected, and online hate speech has become an imminent threat to Jewish safety.”
Those who support absolute freedom of speech often cite Section 230 of the U.S. Communications Decency Act (CDA) of 1996, which was passed in the internet’s early days in an attempt to provide legal protection to online service providers. The law states that platforms are protected from liability for third-party content on their platforms.
Predicting where the law would lead 30 years later proved impossible, yet it remains a vital piece of legislation for companies, which may attempt to curb “indecent and obscene material” without being legally obligated to adhere to truth, safety or editorial standards.
Drawing on the recent murders of Jews in places from Colorado to Washington, D.C., Alexander warned that while technology makes information easily accessible, it fails to teach users how to evaluate it, fueling political tension and enabling radicalization. Alongside its AI tools that can identify bias in language and subtle context clues, HonestReporting runs training initiatives to tackle online antisemitism. In this way, technology becomes another tool in the fight, not through censorship but through education, without violating policies that are in place to protect speech and expression.
“People believe they are getting information when they’re actually getting endorphins. Content masquerading as educational is often incomplete, incorrect or taken completely out of context,” Alexander added. “Now we need to course correct. Just as technology created the problems, it can help create the solutions too. Misinformation will always have a head start online, but we’re getting increasingly fast at using technology to identify it, debunk it and educate people on how to spot it themselves.”
The last few years have seen an explosion in technology and a paradigm shift in attitudes toward Jews on both the left and the right, resulting in a slew of attacks that demand an assertive effort at prevention. The Oct. 7 attack was live-streamed and celebrated by antisemites, and repeated local incidents that threaten Jewish communities have been met with indifference.
A conservative voice
The issue of “Israel” among podcasters and commentators is driving a wedge through online discourse: Both sides have reasons to amplify their voices in the conversation, but after two years of antisemites exploiting the war in Gaza, Jews worldwide are feeling the consequences of online bravado spilling into the real world, with algorithms designed to reward those with bad intentions.
One such example is the recent fallout from the Charlie Kirk assassination. Condemned by the right and celebrated by some on the left, it shows how political extremism and online radicalization, left unchecked, can lead to murder.
“This online celebration of these attacks against Jews is very similar to Charlie Kirk,” said Cohen Montemayor, referencing the political assassination of a conservative voice in the podcast ecosystem who frequently defended Israel and Jews. “Platforms need to invest in an effective and timely response to removing celebration and calls for violence when violent attacks occur. … It was interesting to see the fallout of the disgust and the horror at the celebration of Charlie Kirk’s assassination, because as somebody who’s [been] studying online antisemitism since October 7, I’m thinking, ‘This is exactly what the Jewish community has been going through for the last two years.’”
The hope for Jews adopting new ways to battle an ancient hatred is that the technology to combat antisemitism develops faster than the technology used to spread it, and that policy and education can then follow, sparing millions of people around the world from the dangers it brings. The question isn’t whether we can code ourselves out of hate, but whether the algorithms will always run ahead of us.