The Genians Security Center (GSC) has uncovered a new spear-phishing campaign by the North Korean threat group Kimsuky, marking a troubling evolution in the use of artificial intelligence for cyberattacks. Detected on July 17, 2025, the attack impersonated a South Korean defense-related institution, posing as an office handling official ID issuance for military-affiliated personnel.
According to GSC, “the threat actor used ChatGPT, a generative AI, to produce sample ID card images, which were then leveraged in the attack. This is a real case demonstrating the Kimsuky group’s application of deepfake technology.”
Deepfakes, once seen primarily as tools for disinformation, are now being weaponized for state-sponsored cyber operations. The GSC notes that these AI-generated ID card images were embedded within phishing emails disguised as draft review requests for military employee IDs. Victims who downloaded the attachments unknowingly executed obfuscated PowerShell commands, establishing communication with a command-and-control (C2) server at jiwooeng.co[.]kr.
The attached .zip archive included a malicious shortcut file which, when executed, decoded and launched a PowerShell backdoor. Alongside it, a PNG file—a counterfeit government ID—was analyzed and flagged as 98% likely to be a deepfake by the TruthScan detection service.

This campaign builds on earlier tactics observed by GSC. In June 2025, the group deployed the so-called ClickFix tactic, disguising their malware as CAPTCHA security windows. In both cases, the same malware strain and C2 infrastructure were identified. As GSC explains, “this correlation study helps in understanding the present case of AI deepfake-based forgery of South Korean military agency ID cards.”
The report also draws connections with North Korean IT workers misusing AI in global operations, as documented by Anthropic in August 2025. These operatives allegedly used generative AI to craft fake resumes and technical identities to secure employment abroad, ultimately funneling revenue back to Pyongyang.
The attack was meticulously engineered:
- Malicious LNK files initiated PowerShell execution with strings obfuscated through slicing techniques.
- A batch file (LhUdPC3G.bat) downloaded secondary payloads disguised as legitimate Hancom Office updates.
- The malware leveraged AutoIt scripting with an enhanced variant of the Vigenère cipher for string obfuscation, complicating analysis.
- Persistence was maintained via the Windows Task Scheduler, disguising the process as an update service.
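The string-slicing obfuscation in the first step can be sketched as follows. This is an illustrative Python analogue, not the actual Kimsuky PowerShell code, and the carrier string and slice table are invented for the example: the idea is that a sensitive command name is never stored whole, but is rebuilt at runtime from substrings of an innocuous-looking carrier, which defeats naive signature matching on the script body.

```python
# Illustrative analogue of slicing-based string obfuscation (hypothetical
# data, not taken from the malware). The hidden string only exists in
# memory after the slice table is applied to the carrier.

CARRIER = "zzInvokeqq-Exprppessionss"  # looks like junk to a scanner

# Hypothetical slice table: (start, end) index pairs into CARRIER.
SLICES = [(2, 8), (10, 15), (17, 23)]

def deobfuscate(carrier: str, slices: list[tuple[int, int]]) -> str:
    """Rebuild the hidden string by concatenating carrier[a:b] slices."""
    return "".join(carrier[a:b] for a, b in slices)

hidden = deobfuscate(CARRIER, SLICES)  # -> "Invoke-Expression"
```

In the real attack chain the reassembled string would then be executed, handing control to the downloaded payload.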
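The Vigenère cipher named in the AutoIt step shifts each character by an amount taken from a repeating key. GSC describes an "enhanced variant" whose exact modifications are not public, so the sketch below shows only the classic scheme, in Python for readability:

```python
import string

# Classic Vigenère over the 26-letter uppercase alphabet; the variant
# used by the malware reportedly extends this scheme in undisclosed ways.
ALPHABET = string.ascii_uppercase

def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    """Shift each letter of `text` by the matching key letter's index."""
    sign = -1 if decrypt else 1
    out = []
    for i, ch in enumerate(text):
        shift = ALPHABET.index(key[i % len(key)])
        out.append(ALPHABET[(ALPHABET.index(ch) + sign * shift) % 26])
    return "".join(out)

# Round trip: encrypting then decrypting recovers the original string.
ciphertext = vigenere("ATTACKATDAWN", "LEMON")   # -> "LXFOPVEFRNHR"
plaintext = vigenere(ciphertext, "LEMON", decrypt=True)
```

Because the key repeats, the cipher is trivially breakable once spotted; its value to malware authors is not secrecy but slowing down automated analysis and string extraction.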
The use of deepfakes in phishing campaigns introduces a dangerous precedent for state-sponsored espionage. As GSC warns, “while AI services are powerful tools for enhancing productivity, they also represent potential risks when misused as cyber threats at the level of national security.”
South Korea’s Ministry of Foreign Affairs has echoed similar concerns, cautioning that hiring or outsourcing to North Korean IT workers carries risks ranging from IP theft to reputational damage.
The Kimsuky group’s integration of deepfake-generated ID cards into spear-phishing operations signals a new frontier in AI-enabled cyber warfare. What began as simple document forgery has escalated into the weaponization of generative AI, creating convincing decoys that threaten both organizational and national security.
As the GSC concludes, organizations must “proactively prepare for the possibility of AI misuse and maintain continuous security monitoring across recruitment, operations, and business processes.”