
X exposes Pakistan-based operator spreading AI-generated Iranian war propaganda

Social-media platforms have become incubators for hate speech, often used to incite mob violence and enforce Pakistan’s harsh blasphemy laws.

Uzay Bulut is a Turkish journalist formerly based in Ankara.

The proliferation of AI-generated propaganda and coordinated disinformation campaigns in defense of terror groups and authoritarian regimes—specifically, the Islamic Republic of Iran, Hamas and entities operating in Pakistan—has reached alarming levels as of early 2026. These efforts are designed to distort reality, manipulate public opinion and hinder counter-terrorism efforts.

Online propaganda inciting hatred against Jews, Israel, Hindus, India and dissident Iranians, among other targets, is a significant, well-documented problem, with many campaigns traced to state-sponsored actors and proxies in countries such as Iran and Pakistan. This digital content—ranging from anti-Hindu hate campaigns in Pakistan to antisemitic and anti-Western narratives disseminated by Iranian state media and Iranian social-media accounts—is increasingly coordinated and designed to incite violence and deepen social polarization.

On March 3, a massive, coordinated campaign involving at least 31 hacked accounts on X (formerly Twitter) was traced to Pakistan. These accounts, named “Iran War Monitor,” posted AI-generated videos of fictional Iranian missile strikes against Israel.

Nikita Bier, X’s head of product, posted that the accounts had been hacked and their usernames changed on Feb. 27 to “Iran War Monitor” or variations, saying: “Last night, we found a guy in Pakistan that was managing 31 accounts posting AI war videos. All were hacked and the usernames were changed. … We are getting much faster at detecting this—and also eliminating the incentive to do this.”

In response, X is “revising its Creator Revenue Sharing policies to maintain authenticity of content on Timeline and prevent manipulation of the program,” Bier posted.

“During times of war, it is critical that people have access to authentic information on the ground. … Starting now, users who post AI-generated videos of an armed conflict—without adding a disclosure that it was made with AI—will be suspended from Creator Revenue Sharing for 90 days,” he wrote. “Subsequent violations will result in a permanent suspension from the program. This will be flagged to us by any post with a Community Note or if the content contains metadata (or other signals) from generative AI tools. We will continue to refine our policies and products to ensure X can be trusted during these critical moments.”
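Bier says violations will be flagged when “the content contains metadata (or other signals) from generative AI tools.” X has not published how its detection works, but as a purely hypothetical illustration, a first-pass check could scan a media file for provenance markers that some generative-AI tools embed, such as C2PA “Content Credentials” manifests (carried in JUMBF boxes) or the IPTC `digitalSourceType` value `trainedAlgorithmicMedia`. The marker list and function below are assumptions for the sketch, not X’s actual pipeline:

```python
# Hypothetical sketch only -- NOT X's real detection system.
# Scans a media file's raw bytes for metadata markers associated
# with AI-generated content under the C2PA and IPTC standards.

AI_METADATA_MARKERS = [
    b"c2pa",                     # C2PA "Content Credentials" manifest label
    b"jumb",                     # JUMBF box type that carries C2PA data
    b"trainedAlgorithmicMedia",  # IPTC digitalSourceType for AI-generated media
]

def has_ai_metadata(path: str) -> bool:
    """Return True if the file contains any known generative-AI marker."""
    with open(path, "rb") as f:
        data = f.read()
    return any(marker in data for marker in AI_METADATA_MARKERS)
```

A real system would need far more than this: metadata is trivially stripped on re-encode, so platforms also lean on forensic signals (compression artifacts, frame-to-frame inconsistencies) and behavioral signals such as the coordinated account activity X described.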

This is not the only case in which content creators are leveraging AI to generate and spread fake war content, not just for ideological reasons but to game creator revenue-sharing programs. AI-generated content and manipulated social-media campaigns in Pakistan are often used to spread faith-based hatred against non-Muslims, including Hindus, Christians and Jews, creating significant life-threatening risks.

Social-media platforms like Facebook, X, WhatsApp and YouTube have become incubators for hate speech, often used to incite mob violence and enforce the country’s harsh blasphemy laws. Coordinated campaigns often use fake social-media accounts to spread anti-India and anti-Israel narratives and damage both nations’ bilateral relations.

Simultaneously, the Islamic Republic of Iran has consistently used state-controlled media and proxies to disseminate antisemitic, anti-Israel and anti-Western propaganda, aimed at influencing international audiences and targeting Jewish communities.

Iranian state media and allied actors are, as of 2026, conducting a widespread, parallel information war, often utilizing dehumanizing language to justify violence. These networks use AI-generated content, fake social-media channels, and, in some cases, paid influencers to normalize hate and violence against Iranian protesters, as shown in the “Axis of Amplification” report by the Institute for Strategic Dialogue.

The AI-generated and other propaganda being produced and spread on social media in defense of terror groups and tyrannical regimes such as the Islamic Republic of Iran, Pakistan and Hamas is alarming, political analyst Apostolos Pistolas told me:

“The volume is clearly growing rapidly. During the current Iranian-related conflict alone, BBC Verify and other monitors documented AI-generated or manipulated videos and images racking up hundreds of millions of views across platforms. Fake clips claiming Iranian strikes on US ships, burning Gulf cities, or destroyed Israeli installations spread alongside real footage. In a fast-paced environment, it is difficult for people to separate reality from AI-generated events. They do not have time for that.

“Iranian state media and affiliated networks have ramped up AI use, blending it with traditional propaganda in ways experts compare to Russian tactics in Ukraine. At the same time, opportunistic networks, frequently traced to Pakistan, flood platforms with pro-Iran content for ad revenue or clicks. Similar patterns have appeared for years in support of Hamas, Hezbollah, and anti-Israel narratives, often overlapping with anti-Jewish, anti-Indian/Hindu, or anti-dissident Iranian material. Platforms like X have repeatedly dismantled Iranian-linked influence operations involving thousands of accounts, but the low cost of generative AI has made the problem harder to tackle.”

Many people believe such propaganda without questioning its authenticity due to factors such as emotional manipulation, confirmation bias and echo chambers, low digital literacy and AI awareness, and the illusion of authority, says Pistolas.

“War videos are designed to trigger anger, fear or loyalty. Once the heart is engaged, the head often disengages. Algorithms feed users content that matches their existing worldview. If someone already distrusts Israel, India or ‘the West,’ pro-regime clips feel like validation,” he wrote. “Many users (especially in regions with heavy state media influence) cannot spot unnatural motion, inconsistent lighting, or missing metadata. Generative AI has made deepfakes cheaper and more convincing than ever, and therefore, more difficult to detect.”

He said that “content from accounts styled as ‘War Monitor’ or mimicking official Iranian sources looks legitimate at a glance. Reposts from friends or influencers add social proof.”

Serious consequences will likely emerge if such propaganda is not challenged and exposed, Pistolas added. “Such propaganda will likely lead to increased real-world violence and radicalization. Content that glorifies attacks or dehumanizes a certain group of people has historically correlated with spikes in hate crimes.”

“Following the Oct. 7, 2023, massacre, antisemitic incidents surged globally alongside similar online narratives; the same type of risk applies here,” Pistolas continued. “It might lead to policy and conflict escalation: Exaggerated casualty claims can pressure governments, mislead the public during crises, or prolong wars by inflating perceived successes on one side. Another consequence could be societal division and loss of trust: When millions see convincing fakes presented as news, cynicism spreads, making it harder for accurate reporting (including from the ground in conflict zones) to break through. Targeted harassment and suppression will likely occur as well: Accounts amplifying hate often intimidate journalists, activists and minorities.”

In this digital age, jihadis no longer need to fly halfway across the world to join their favorite cause—be it in Kashmir, Gaza, Iran or elsewhere. They can be a jihadi from behind their screen, contributing to the violence through propaganda or cyber-attacks, like the pro-ayatollah accounts, Hamas supporters and Pakistani content creators who are increasingly using generative AI to create fabricated images and videos of airstrikes, often to exaggerate damage or justify violence.

Such dehumanizing hatred and false narratives targeting certain nations do not stay solely digital and often have real-life consequences, such as physical violence against targeted groups or manipulations of state policies. Social-media platforms should take precautions against this media jihad to counter its potentially destructive consequences.
