
As artificial intelligence tools rise in popularity, so too does their abuse by cybercriminals. A recent investigation by Mandiant Threat Defense reveals a sprawling campaign by the threat actor UNC6032, who is leveraging fake AI video generator websites to deliver a potent cocktail of malware.
Masquerading as AI services like Luma AI, Canva Dream Lab, and Kling AI, the attackers ran thousands of malicious social media ads on platforms like Facebook and LinkedIn, targeting users across the globe.
“Mandiant Threat Defense has identified thousands of UNC6032-linked ads that have collectively reached millions of users…,” the report states.
These ads redirected users to fraudulent websites that claimed to offer AI-powered video generation. Instead of a generated video, users received a ZIP archive containing an executable disguised as a video file, its real .exe extension pushed out of sight by special Unicode padding characters in the filename.
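That padding trick can be caught programmatically. The sketch below is a minimal defensive example, not code from the Mandiant report: it lists the members of a downloaded ZIP and flags filenames that combine an executable extension with filler or format characters. The character list, the extension list, and the sample archive name video_dreams.zip are illustrative assumptions.

```python
import unicodedata
import zipfile

# Characters commonly abused to pad or reorder a filename so the real
# extension scrolls out of view (illustrative list, not exhaustive).
SUSPICIOUS_CHARS = {
    "\u2800",  # Braille Pattern Blank
    "\u202E",  # Right-to-Left Override
    "\u00A0",  # No-Break Space
    "\u3000",  # Ideographic Space
}

EXECUTABLE_EXTS = (".exe", ".scr", ".com", ".bat", ".cmd", ".msi")

def flag_disguised_executables(zip_path: str) -> list[str]:
    """Return ZIP member names that look like padded or disguised executables."""
    flagged = []
    with zipfile.ZipFile(zip_path) as archive:
        for name in archive.namelist():
            has_filler = any(ch in SUSPICIOUS_CHARS for ch in name)
            has_format_chars = any(unicodedata.category(ch) == "Cf" for ch in name)
            is_executable = name.rstrip().lower().endswith(EXECUTABLE_EXTS)
            looks_like_video = ".mp4" in name.lower()
            if is_executable and (has_filler or has_format_chars or looks_like_video):
                flagged.append(name)
    return flagged

if __name__ == "__main__":
    # "video_dreams.zip" is a hypothetical download used for illustration.
    for member in flag_disguised_executables("video_dreams.zip"):
        print(f"[!] disguised executable: {member!r}")
```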
The main payload, dubbed STARKVEIL, is a Rust-based dropper designed to deploy multiple modular malware strains:
- GRIMPULL – a downloader that tunnels its C2 traffic over Tor and ships with anti-VM and sandbox-evasion checks.
- XWORM – a backdoor with keylogging, command execution, screen capture, and USB-spreading capabilities.
- FROSTRIFT – a stealthy data collector that targets browser extensions, cryptocurrency wallets, and stored credentials.
“The malware makes extensive use of DLL side-loading, in-memory droppers, and process injection…,” the report explains.
Each component uses DLL side-loading, process hollowing, and AutoRun registry persistence to silently entrench itself in the host system. Victims may be shown fake error windows simulating file corruption, coaxing them to retry execution and unknowingly complete the infection.
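From the defender's side, AutoRun persistence of this kind is visible in the Run keys. The Windows-only sketch below, a hedged illustration rather than anything taken from the report, enumerates the per-user Run key with Python's standard winreg module and prints entries whose command lines launch from user-writable directories; the path hints are an assumed heuristic to tune for your environment.

```python
import winreg  # Windows-only standard library module

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

# Directories legitimate autostart entries rarely launch from
# (illustrative heuristic, not a definitive indicator).
SUSPICIOUS_PATH_HINTS = ("\\appdata\\", "\\temp\\", "\\downloads\\", "\\public\\")

def audit_user_run_key() -> list[tuple[str, str]]:
    """Return (value_name, command) pairs from the HKCU Run key that point
    into user-writable locations."""
    findings = []
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
        index = 0
        while True:
            try:
                name, command, _value_type = winreg.EnumValue(key, index)
            except OSError:  # raised once there are no more values
                break
            if any(hint in str(command).lower() for hint in SUSPICIOUS_PATH_HINTS):
                findings.append((name, str(command)))
            index += 1
    return findings

if __name__ == "__main__":
    for name, command in audit_user_run_key():
        print(f"[!] suspicious Run entry {name!r}: {command}")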

Mandiant’s analysis of Meta’s Ad Library uncovered over 2.3 million estimated ad views in the EU alone, with ads hosted by both attacker-created and compromised Facebook accounts. On LinkedIn, UNC6032’s ads generated 50,000–250,000 impressions, mostly targeting the U.S., Europe, and Australia.
“Their actions have fueled a massive and rapidly expanding campaign centered on fraudulent websites masquerading as cutting-edge AI tools.”
The attackers constantly rotate domains, deploying new ads daily to stay ahead of detection systems. Payloads hosted on fake AI domains like lumalabsai[.]in are updated regularly, serving different obfuscated versions with the same functionality.
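A lightweight way to catch lookalike domains such as lumalabsai[.]in is to compare the registrable host of a URL against the impersonated vendors' real domains. The sketch below uses Python's difflib for a rough similarity score; the legitimate-domain list and the 0.75 threshold are assumptions for illustration, not values from the report.

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

# Real domains of the impersonated services (verify independently before relying on this list).
LEGITIMATE_DOMAINS = {"lumalabs.ai", "canva.com", "klingai.com"}

SIMILARITY_THRESHOLD = 0.75  # illustrative cut-off for "suspiciously similar"

def check_domain(url: str) -> str:
    """Classify a URL's host as legitimate, a lookalike, or unrelated."""
    host = (urlparse(url).hostname or "").lower().removeprefix("www.")
    if host in LEGITIMATE_DOMAINS:
        return "legitimate"
    for legit in LEGITIMATE_DOMAINS:
        score = SequenceMatcher(None, host, legit).ratio()
        if score >= SIMILARITY_THRESHOLD:
            return f"lookalike of {legit} (similarity {score:.2f})"
    return "unrelated"

if __name__ == "__main__":
    for candidate in ("https://lumalabs.ai", "https://lumalabsai.in/download"):
        print(candidate, "->", check_domain(candidate))
```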
Once deployed, the malware extracts login credentials, cookies, credit card data, and even Facebook account details—transmitting the stolen information through the Telegram API. Reconnaissance efforts include AV detection, OS identification, and user role analysis.
“This campaign has been active since at least mid-2024 and has impacted victims across different geographies and industries.”
The use of Telegram tokens and chat IDs in malware configurations makes exfiltration immediate and difficult to trace. The attackers maintain persistent access through registry values and stealthy plugin systems.
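One practical way to hunt for this exfiltration path is to search outbound proxy or web logs for requests to the Telegram Bot API, which ordinary desktop workloads rarely contact directly. The sketch below is a minimal example assuming a plain-text log with one URL per line; the file name proxy.log and the log format are assumptions, not artifacts from the campaign.

```python
import re
from pathlib import Path

# Telegram Bot API calls look like https://api.telegram.org/bot<token>/<method>;
# bot tokens follow the "<numeric id>:<secret>" pattern.
BOT_API_PATTERN = re.compile(r"https?://api\.telegram\.org/bot\d+:[A-Za-z0-9_-]+/\w+")

def find_telegram_bot_traffic(log_path: str) -> list[str]:
    """Return log lines containing Telegram Bot API requests."""
    hits = []
    for line in Path(log_path).read_text(errors="replace").splitlines():
        if BOT_API_PATTERN.search(line):
            hits.append(line.strip())
    return hits

if __name__ == "__main__":
    # "proxy.log" is a hypothetical export of outbound web requests.
    for hit in find_telegram_bot_traffic("proxy.log"):
        print("[!] possible Telegram exfiltration:", hit)
```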
Mandiant advises extreme caution when engaging with AI tools—especially those promoted via ads. Verify domains, inspect downloads, and maintain up-to-date endpoint protections to defend against this evolving threat.