Hatred amplifies as social-media platforms fail to enforce community guidelines

Right-wing extremist rhetoric continues to reach receptive Americans.

Social-media app icons on a smartphone screen. Credit: Thomas Ulrich/Pixabay.

Jews have become an easy target for hate as disinformation and conspiracy theories run rampant on social media and in daily conversation. Two new reports add to the growing evidence of widespread hatred on social-media platforms. Hate respects no boundaries; once ignited, it spreads like wildfire.

Two 14-year-old users with new accounts on Facebook, Instagram and Twitter (now called X) were bombarded by Nazi propaganda, Holocaust denial and white supremacist symbols. “Suggested for You” recommendations on Instagram included accounts equating Judaism with Satanism, denying Nazi involvement in the Holocaust and promoting blood libels, such as “Jews when they’re about to eat a Christian baby.”

This disturbing imagery comes from two new reports published by the ADL and the Tech Transparency Project. Researchers in the first report, “Algorithmic Amplification of Antisemitism and Extremism,” created six personas, including the 14-year-olds. Instagram, Facebook and Twitter recommended explicitly antisemitic content to test personas of different ages and genders who searched for or looked at conspiracy theories and related topics. Much of the content violated the platforms’ own hate-speech policies. YouTube was an outlier; it did not recommend anti-Jewish content. All of these social-media companies have policies that prohibit hate speech against minority groups, including Jews.

The second report, “Auto-generating & Autocompleting Hate,” exposed the proliferation of hate groups and the promotion of their content. Facebook, Instagram and YouTube were found to be hosting dozens of hate groups and movements on their platforms, many of which violate the companies’ own policies, but were easy to find via a quick search.

The ADL concluded with three recommendations: Tech companies need to fix the product features that currently escalate antisemitism and auto-generate hate and extremism; Congress must update Section 230 of the Communications Decency Act to fit the reality of today's internet; and more transparency on search-engine recommendations is needed.

Failing policies and guidelines spark backlash

Social-media companies have a shameful track record of not following their own company policies and community guidelines. However, in some instances, they have blocked or removed content following a public outcry. An ADL report from April specifically looked at Holocaust denial policies. Policy reviews were scored, with YouTube earning a C+, Facebook/Instagram a C- and Twitter a D-.

Recently, pharmaceutical company Gilead and the Internet and Television Association suspended their ad spending on Twitter following a report showing that ads for major brands appeared on an account praising Hitler and the Nazis. Other affected brands included Amazon, the Atlanta Falcons, Office Depot, Samsung and Sports Illustrated. Twitter also recently allowed a Community Note linking to white supremacist websites and falsely stating that Leo Frank—the victim of an anti-Jewish lynching—was guilty of the rape and murder of a young girl.

One rare example of successful policy enforcement was during the 2020-21 academic year. San Francisco State University hosted unrepentant Palestinian terrorist Leila Khaled. She hijacked a TWA flight to Tel Aviv in 1969 on behalf of the Popular Front for the Liberation of Palestine and attempted to hijack an El Al flight in 1970. Zoom and Facebook removed links from their platforms, stating that the Khaled program violated company policies. Zoom specifically cited its “commitment to anti-terrorism laws.”

Social media fuels attacks

Recent history shows that hate online can lead to offline tragedy. Right-wing extremist rhetoric continues to reach receptive Americans. White supremacist conspiracy theories were cited by the terrorists responsible for killing 11 Jews at the Tree of Life Synagogue in Pittsburgh and 23 people in the Walmart shooting in El Paso, Texas, that targeted Latinos. Similar sentiments were shared by the 18-year-old white supremacist who singled out African-Americans in an attack in Buffalo, N.Y., that killed 10 people.

Points to consider:

  1. Hate, lies and conspiracy theories spread easily on social media.

The speed and reach of these platforms enable hateful content to quickly gain traction, potentially influencing a wide audience. The algorithms employed by these platforms can amplify extreme viewpoints and hate speech—creating echo chambers that validate and embolden users. This can push certain individuals to take dangerous actions based on these views. The rapid spread of harmful content and misinformation not only fosters mistrust and division but can also have real-world consequences, from incitement to violence to acts of terrorism. The lack of proper fact-checking mechanisms and the promotion of sensationalist material further exacerbate this problem.

  2. Private companies must enforce their own rules and remove harmful content.

Social-media corporations freely create their own rules and have a responsibility to uphold the guidelines they establish for their platforms. Effective enforcement of rules, especially those aimed at curbing hate speech and violent content, is vital to prevent the spread of false narratives. Companies should invest in identifying and removing harmful content swiftly rather than favoring profits over people. A consistent and transparent approach to moderation builds trust with users, fostering an environment where diverse voices can be heard without fear of harassment or harm. While navigating the complexities of free expression, companies should recognize that their platforms wield significant influence, and responsible enforcement is crucial for preventing the amplification of hatred and extremist ideologies.

  3. Social-media users should report hateful content whenever they see it.

Empowering social-media users to report hateful content is a crucial step in maintaining a safe online environment. Given the vast scale of these platforms, relying solely on trained moderators and automated systems is not enough. Reporting mechanisms give users an essential opportunity to flag instances of hate speech, discrimination and harmful ideologies that might otherwise go unnoticed. Companies must streamline the reporting process, ensuring ease of use and clear instructions, while also prioritizing the investigation and removal of flagged content. This united effort contributes to the elimination of toxic content from these platforms.

  4. Education promotes understanding and counters hate.

Education stands as a potent tool in combating hate and fostering social understanding. By providing individuals with knowledge, critical-thinking skills and a broader perspective, education can act as a barrier against the spread of hate and intolerance. It empowers people to question stereotypes, confront biases and engage in meaningful conversations that bridge divides. Schools and social-media platforms must promote digital literacy to help users discern reliable information from toxic narratives. By investing in education that promotes empathy, cultural awareness and open dialogue, future generations can be more resilient against the allure of hate and better equipped to create a peaceful future.


About the publisher
The Focus Project is a consensus initiative of major American Jewish organizations that provides crucial news, talking points and background content about issues affecting Israel and the Jewish people, including antisemitism, anti-Zionism and relevant events in the Middle East. Click here to receive weekly Talking Points from The Focus Project.
Releases published on the JNS Wire are communicated and paid for by third parties. Jewish News Syndicate, and any of its distribution partners, take zero responsibility for the accuracy of any content published in any press release. All the statements, opinions, figures in text or multimedia including photos or videos included in each release are presented solely by the sponsoring organization, and in no way reflect the views or recommendation of Jewish News Syndicate or any of its partners. If you believe any of the content in a release published on JNS Wire is offensive or abusive, please report a release.