
A new Google Threat Intelligence Group (GTIG) report titled “Adversarial Misuse of Generative AI” provides a detailed analysis of how nation-state cyber actors are experimenting with AI tools, particularly Google’s Gemini, in their offensive operations. Iranian, Chinese, North Korean, and Russian state-backed Advanced Persistent Threat (APT) groups have been observed leveraging AI to aid reconnaissance, malware development, and influence campaigns.
Despite concerns that AI could be a game-changer for cybercrime, the report concludes that AI is not yet enabling novel attack capabilities but is helping threat actors move faster and automate some processes.
“While AI can be a useful tool for threat actors, it is not yet the gamechanger it is sometimes portrayed to be. While we do see threat actors using generative AI to perform common tasks like troubleshooting, research, and content generation, we do not see indications of them developing novel capabilities.”
Iranian APT groups, particularly APT42, have been the heaviest users of Gemini, leveraging AI for:
🔹 Reconnaissance – Researching defense organizations and cybersecurity companies.
🔹 Phishing Campaigns – Crafting convincing phishing emails and tailoring content for US defense targets.
🔹 Vulnerability Research – Investigating publicly known vulnerabilities in Atlassian, MikroTik, and Apereo software.
“APT42 used the text generation and editing capabilities of Gemini to craft material for phishing campaigns, including generating content with cybersecurity themes and tailoring the output to a US defense organization.”
Additionally, Iranian cyber actors experimented with AI-assisted malware development, testing red teaming techniques to see how AI could support offensive cybersecurity operations.
Chinese APT actors used Gemini for:
🕵️ Reconnaissance – Researching US military operations, defense contractors, and intelligence personnel databases.
🛠 Malware Development – Converting existing infostealer malware into Node.js, automating Active Directory attacks, and developing Chrome extensions to bypass security controls.
🎭 Deception – Generating fake company profiles and social engineering materials.
“Chinese APT actors used Gemini to conduct reconnaissance, for scripting and development, to troubleshoot code, and to research how to obtain deeper access to target networks.”
Google observed that Chinese hackers frequently used AI to refine intrusion techniques, including privilege escalation and lateral movement within compromised networks.
North Korean cyber groups, including APT43, used AI tools to:
💰 Target Financial Institutions – Researching cryptocurrency platforms and financial networks.
📧 Enhance Phishing Attacks – Developing convincing job applications and cover letters to place North Korean IT workers inside Western companies.
🛠 Develop Malware and Evasion Techniques – Writing C++ webcam recording malware, scripting sandbox evasion tactics, and learning how to bypass Google Voice restrictions.
“North Korean APT actors used Gemini to support several phases of the attack lifecycle, including researching potential infrastructure and free hosting providers, reconnaissance on target organizations, payload development, and assistance with malicious scripting and evasion techniques.”
North Korean hackers have a long-standing interest in AI-generated phishing lures and deepfake technology, which could be used to support financial fraud and espionage operations.
Unlike their counterparts in other nations, Russian APT groups showed limited engagement with Gemini, possibly due to operational security concerns about using a Western-controlled AI model.
However, Google found that Russian actors:
🔹 Used Gemini for malware reengineering, translating existing malware into different programming languages.
🔹 Added AES encryption to existing attack tools.
🔹 Researched how to automate social media disinformation campaigns.
“Russian APT actors had limited use of Gemini, with most usage focused on converting publicly available malware into another coding language and adding encryption functions to existing code.”
Given that Russia develops its own AI models, it is likely that Moscow-based cyber units are using domestic AI systems rather than publicly available Western platforms.
Beyond cyberattacks, Google also detected state-backed disinformation campaigns leveraging AI. Information Operations (IO) actors used Gemini for:
📰 Generating propaganda – Writing articles on political topics, rewriting news headlines, and tailoring bias-heavy narratives.
📢 Enhancing social media influence – Researching SEO techniques, crafting viral social media campaigns, and increasing engagement.
🗣 Translation and Localization – Adapting influence operations for different languages and regions.
Iranian IO actors were collectively the heaviest users of Gemini for information operations, while the pro-China influence network DRAGONBRIDGE was the single most prolific IO actor, using Gemini to generate narratives and tailor them for different audiences.
“Iranian IO actors accounted for three quarters of all use by IO actors. They used Gemini for content creation and manipulation, including generating articles, rewriting text with a specific tone, and optimizing it for better reach.”
Despite fears of AI-generated cyberweapons, the report found no evidence that Gemini was successfully used for:
🚫 Creating new malware from scratch.
🚫 Writing exploits for unknown vulnerabilities.
🚫 Developing fully autonomous cyberattack capabilities.
Threat actors tried to bypass Gemini’s safeguards using basic, publicly available jailbreak prompts, but these attempts were unsuccessful and Google’s safety controls held up.
“Threat actors attempted unsuccessfully to use Gemini to enable abuse of Google products, including researching techniques for Gmail phishing, stealing data, coding a Chrome infostealer, and bypassing Google’s account verification methods.”
The misuse of AI in cybercrime is growing, but Google’s report confirms that threat actors are not yet using AI in groundbreaking ways. Instead, APT groups are leveraging AI as an efficiency tool, similar to how they use Metasploit or Cobalt Strike.
However, as AI models continue to evolve, Google expects threat actors to refine their tactics, which could lead to more advanced AI-driven cyber operations in the future.
“Rather than enabling disruptive change, generative AI allows threat actors to move faster and at higher volume. However, current LLMs on their own are unlikely to enable breakthrough capabilities for threat actors.”