
Meta has taken down three covert influence operations originating from China, Iran, and Romania, according to its latest Adversarial Threat Report for Q1 2025. The campaigns, examples of what Meta calls coordinated inauthentic behavior (CIB), were designed to manipulate public discourse using fake accounts, AI-generated personas, and deceptive narratives, and were dismantled before they gained significant traction on Meta’s platforms.
“We detected and removed these campaigns before they were able to build authentic audiences on our apps,” Meta stated, emphasizing its proactive posture in tackling adversarial threats.
China-Based Network: Targeting Asia with AI-Generated Identities
Meta dismantled a Chinese-origin operation that used 157 fake Facebook accounts, 19 Pages, and 17 Instagram profiles to target audiences in Myanmar, Taiwan, and Japan. The campaign involved multiple account clusters that impersonated locals and posted in several languages, including English, Burmese, Mandarin, and Japanese.
“Some of these accounts used profile photos likely created using artificial intelligence,” the report noted, referring to synthetic media tactics often employed to evade detection.
The campaign pushed politically charged narratives, such as:
- In Myanmar: Criticism of civil resistance and support for the military junta.
- In Japan: Anti-government rhetoric and opposition to U.S. military ties.
- In Taiwan: Allegations of political and military corruption.
Meta found connections between this operation and two Chinese networks it previously removed in 2022 and 2024, indicating persistent and evolving tactics.
Iran-Based Network: Amplifying Anti-Western Sentiment
Meta also uncovered a covert influence operation from Iran targeting Azeri-speaking communities in Azerbaijan and Turkey. The company removed 17 Facebook accounts, 22 Pages, and 21 Instagram accounts linked to the network, which also maintained a presence on X (Twitter), YouTube, and dedicated websites such as israelboycottvoice[.]com.
“Many of these accounts posed as female journalists and pro-Palestine activists,” Meta revealed, with posts covering the Paris Olympics, Gaza conflict, and calls to boycott U.S. brands.
The operation’s content strategy included riding trending hashtags such as #palestine, #gaza, and #starbucks to infiltrate public discourse. Meta credited a tip-off from Google’s Threat Intelligence Group and noted links to Storm-2035, a campaign previously documented by OpenAI and Microsoft.
Romania-Based Network: Domestic Disinformation and High Ad Spend
In Romania, Meta removed 658 Facebook accounts, 14 Pages, and two Instagram accounts engaged in a domestic influence operation. These accounts used fabricated identities to comment on political posts and to drive traffic to off-platform websites. The network invested heavily in outreach, spending about $177,000 on ads, paid mostly in U.S. dollars.
“The majority of these comments received no engagement from authentic audiences,” the report stated, underscoring the operation’s failure to gain real influence.
To bolster credibility, the network operated across YouTube, X (Twitter), and TikTok, posing as locals discussing everyday topics like sports, travel, and local news. Notably, the campaign employed sophisticated operational security (OpSec), including proxy IPs to obscure its origin.
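Platforms often turn this kind of OpSec into a detection signal: accounts whose logins arrive exclusively from known proxy or VPN ranges merit a closer look. The sketch below is purely illustrative; the proxy ranges (IETF documentation blocks used as stand-ins) and account data are hypothetical, and a real system would consume a maintained proxy feed rather than a hard-coded list.

```python
import ipaddress

# Hypothetical proxy/VPN egress ranges; real deployments would pull these
# from a maintained feed. The blocks below are IETF documentation ranges
# used here purely as stand-ins.
PROXY_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_proxy_ip(ip: str) -> bool:
    """Return True if the address falls inside a known proxy range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in PROXY_RANGES)

# Flag accounts whose logins come only from proxy space: a signal of
# origin-masking, not proof of coordinated inauthentic behavior on its own.
logins = {
    "acct_1": ["203.0.113.7", "203.0.113.42"],
    "acct_2": ["192.0.2.5"],
}
flagged = [acct for acct, ips in logins.items() if all(is_proxy_ip(i) for i in ips)]
print(flagged)  # ['acct_1']
```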
Since 2017, Meta has expanded its scope beyond Russian disinformation to tackle threats globally. The company now integrates insights from industry partners and open-source intelligence and publishes associated indicators on GitHub to aid the broader security community.
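For defenders who want to consume those indicators, a script along the following lines can pull a published CSV and extract, say, the domains for blocklisting. This is a minimal sketch: the URL, file path, and column names are assumptions for illustration, so check the actual repository layout before relying on them.

```python
import csv
import io
import urllib.request

# Assumed location and layout: Meta publishes indicators on GitHub, but the
# exact repository path, file name, and CSV columns below are hypothetical.
INDICATORS_URL = (
    "https://raw.githubusercontent.com/facebook/threat-research/"
    "main/indicators/example_indicators.csv"
)

def load_indicators(url: str) -> list[dict]:
    """Download a CSV of threat indicators and parse each row into a dict."""
    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode("utf-8")
    return list(csv.DictReader(io.StringIO(text)))

def extract_domains(rows: list[dict]) -> set[str]:
    # Column names ("indicator_type", "indicator_value") are assumptions;
    # inspect the real CSV header first.
    return {
        row["indicator_value"]
        for row in rows
        if row.get("indicator_type") == "domain"
    }

if __name__ == "__main__":
    rows = load_indicators(INDICATORS_URL)
    print(f"Loaded {len(rows)} indicators")
    for domain in sorted(extract_domains(rows))[:10]:
        print(domain)
```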
“We focus on behavior, not content — no matter what they post or whether they’re foreign or domestic,” Meta emphasizes in its policy on CIB enforcement.