
Antisemitism on YouTube led to monetizing hate content, report finds

While most of the content examined by CyberWell violated the platform’s own guidelines, only a fraction—less than 11%—was removed after being reported by users.

Video on a desktop computer monitor. Credit: MerandaDevan/Pixabay.

As YouTube marked its 20th anniversary on Feb. 14, a new report has shed light on the social-media platform’s failures to enforce “Advertiser-Friendly Content” policies that have effectively monetized antisemitic content on the platform.

Research undertaken by CyberWell, a tech nonprofit focused on monitoring and combating the spread of antisemitism, in addition to Holocaust denial and distortion online, found major gaps in the enforcement of YouTube’s policies, revealing that 24% of verified anti-Jewish videos in English were monetized with ads.

The research examined hundreds of AI-flagged videos containing blatant Jew-hatred, as well as antisemitic tropes and conspiracy theories, posted in English and Arabic. Conducted over the second half of 2024 and delivered to YouTube alongside enforcement recommendations, the study focused on videos posted between October 2023 and October 2024.

Tal-Or Cohen Montemayor. Credit: Courtesy.

“As YouTube celebrates its milestone anniversary, it must take greater responsibility for the content that appears on its site and take immediate action to curb the spread of antisemitism and bar the monetization of hate and harmful content,” said Tal-Or Cohen Montemayor, founder and executive director of CyberWell.

CyberWell uses AI technology to monitor posts consistent with the International Holocaust Remembrance Alliance’s (IHRA) working definition of antisemitism. Each post is individually vetted by the nonprofit’s analysts and submitted to social-media platform moderators alongside relevant community guidelines and hate-speech policies the individual post violates (sometimes referred to as “Trust and Safety”).

CyberWell’s findings show that 24% of the English-language videos analyzed and a concerning 36% of the Arabic-language videos were monetized with ads. This monetization creates a direct financial incentive for both YouTube’s parent company, Google, and YouTube creators, effectively rewarding the production and amplification of hate content.

While most of the content examined violated YouTube’s guidelines, including its own hate-speech policy, only a fraction—less than 11%—was removed after being reported by users. (According to YouTube’s “Advertiser-Friendly Content” policy, content that disparages, humiliates or incites hatred against individuals or groups based on their race, religion or ethnicity is not eligible for monetization.)

This falls well below YouTube’s average removal rate for online antisemitism, which CyberWell documented as 32.1% in its 2024 annual report. The study further illustrated significant gaps in YouTube’s enforcement of its own policies in detecting and removing antisemitic videos that violate the company’s community guidelines.

Content creators often circumvented YouTube’s automated detection systems, which rely heavily on voice-recognition technology, by overlaying text or using visual aids to avoid detection. Some users simply posted disclaimers claiming that their content was not affiliated with hate groups before publishing anti-Jewish material.

Cohen Montemayor said, “YouTube’s algorithms appear ill-equipped to recognize the full range of antisemitic rhetoric, particularly when expressed through images or subtle language. The lack of precision in identifying the full range of Jew-hatred has led to worrying gaps in the enforcement of its policies against antisemitic content and to a failure to protect brands on their platform.”

In response to the findings, CyberWell issued several recommendations. Among them are calls to rigorously enforce the platform’s hate-speech policy; improve detection of religious antisemitism; and implement stricter safeguards on monetized content. The report also suggests that YouTube consider adopting new methods of detecting antisemitic rhetoric in video thumbnails, images and written disclaimers, which have been employed to bypass the platform’s automated systems.
