The Google Threat Intelligence Group (GTIG) has released a report revealing that threat actors have moved beyond using AI for productivity and are now embedding large language models (LLMs) directly into active cyber operations.
According to the report, “Adversaries are no longer leveraging artificial intelligence (AI) just for productivity gains, they are deploying novel AI-enabled malware in active operations. This marks a new operational phase of AI abuse, involving tools that dynamically alter behavior mid-execution.”
This shift, GTIG warns, introduces an entirely new class of “just-in-time” AI-powered malware capable of rewriting its own code in real time to evade detection — a development that could dramatically change the cybersecurity landscape.
For the first time, GTIG has documented malware families that use LLMs during execution, including PROMPTFLUX, PROMPTLOCK, and PROMPTSTEAL.
“These tools dynamically generate malicious scripts, obfuscate their own code to evade detection, and leverage AI models to create malicious functions on demand, rather than hard-coding them into the malware,” the report explains.
Among the most concerning is PROMPTFLUX, a VBScript-based dropper that uses the Gemini API to rewrite its own source code hourly, effectively allowing it to mutate and evade antivirus signatures.
“PROMPTFLUX prompts the LLM to rewrite its own source code, saving the new, obfuscated version to the Startup folder to establish persistence,” GTIG noted.
The malware’s “Thinking Robot” module communicates with Gemini using a hard-coded API key, instructing the model to produce fresh obfuscated code. The report describes this as an early prototype of metamorphic malware that can evolve dynamically — a hallmark of future autonomous cyber threats.
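To make the mechanic concrete, here is a minimal, deliberately defanged sketch of that just-in-time regeneration loop, assuming the publicly documented `google-generativeai` Python client. The model name, prompt, and output path are illustrative assumptions, and unlike PROMPTFLUX the script neither obfuscates, executes, nor persists what the model returns.

```python
# Defanged sketch of the "just-in-time" self-rewriting pattern GTIG
# attributes to PROMPTFLUX. Model name, prompt, and paths are assumptions;
# nothing here obfuscates, persists, or executes the generated code.
import pathlib

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # PROMPTFLUX hard-codes its key
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model name

source = pathlib.Path(__file__).read_text()
prompt = (
    "Rewrite the following Python script so that it behaves identically "
    "but uses different names and structure:\n\n" + source
)

response = model.generate_content(prompt)

# The real malware writes the rewritten variant to the Startup folder so a
# fresh, signature-breaking copy runs at boot; this sketch only saves it
# next to the original for inspection.
pathlib.Path("rewritten_variant.py").write_text(response.text)
```

Run hourly, a loop like this yields a binary-distinct sample on every cycle, which is precisely what makes static signatures brittle against it.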

Although GTIG says PROMPTFLUX is still in the development phase, it calls the discovery “a significant indicator of how malicious operators will augment their campaigns with AI moving forward.”
In June 2025, GTIG observed APT28 (FROZENLAKE) — a Russian government-backed hacking group — deploying PROMPTSTEAL, a data-mining malware that interacts with the Hugging Face API to query the Qwen2.5-Coder-32B-Instruct LLM.
“APT28’s use of PROMPTSTEAL constitutes our first observation of malware querying an LLM deployed in live operations,” GTIG wrote.
The malware uses AI-generated commands to harvest system and document data from Windows hosts:
“PROMPTSTEAL novelly uses LLMs to generate commands for the malware to execute rather than hard-coding the commands directly in the malware itself.”
Masquerading as an “image generation” app, the malware silently collects system information and copies user documents into a hidden directory before exfiltrating them to a command-and-control (C2) server.
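The query pattern itself is simple to reproduce in benign form. The sketch below assumes the public Hugging Face Inference API and its text-generation response format, and asks the same Qwen model for a single host-survey command; the token and prompt are placeholders, and where PROMPTSTEAL pipes the model’s answer straight into execution, this sketch only prints it.

```python
# Conceptual sketch of runtime LLM command generation, as GTIG describes
# for PROMPTSTEAL. Endpoint shape and response format follow the public
# Hugging Face Inference API for text generation; the token and prompt
# are placeholders, and the suggested command is printed, never executed.
import requests

API_URL = (
    "https://api-inference.huggingface.co/models/"
    "Qwen/Qwen2.5-Coder-32B-Instruct"
)
HEADERS = {"Authorization": "Bearer YOUR_HF_TOKEN"}  # placeholder token

prompt = (
    "Respond with a single Windows cmd.exe command that lists basic "
    "system information. Output only the command."
)

resp = requests.post(API_URL, headers=HEADERS, json={"inputs": prompt})
resp.raise_for_status()
command = resp.json()[0]["generated_text"]

print("LLM-suggested command:", command)  # PROMPTSTEAL would execute this
```

Because the commands never exist on disk until the model returns them, defenders cannot recover the malware’s full behavior from the sample alone.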
This marks a pivotal shift — AI now acts as a co-processor for active malware operations, blurring the line between algorithmic assistance and autonomous decision-making.
Beyond experimental malware, state-backed actors from China, Iran, and North Korea have been actively misusing Google’s Gemini AI models for reconnaissance, phishing, and tooling development.
GTIG reports that a China-nexus threat actor manipulated Gemini by pretending to be a cybersecurity student in a Capture-the-Flag (CTF) competition to bypass AI safeguards.
“When prompted to help in a CTF exercise, Gemini returned helpful information that could be misused to exploit the system,” the report stated.
The actor leveraged this access to refine exploitation scripts, phishing kits, and web shells, effectively using social-engineering tactics against the AI itself.
Iranian actors TEMP.Zagros (MuddyCoast) and APT42 were observed using Gemini for malware development and data analysis.
TEMP.Zagros posed as a “university student” seeking programming help, coaxing the model into offering technical advice for its custom malware project. In the process, the actor inadvertently exposed its C2 domains and encryption keys, allowing Google to disrupt the campaign.

APT42, meanwhile, attempted to build a “Data Processing Agent” using Gemini to convert natural-language queries into SQL commands to mine personal data — a chilling preview of AI-assisted surveillance.
“The agent converts natural language requests into SQL queries to derive insights from sensitive personal data,” GTIG reported.
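The underlying natural-language-to-SQL technique is itself commonplace. A minimal sketch, assuming the same `google-generativeai` client as above and an invented table schema, shows how little code such an agent needs; a real agent would then execute the generated SQL against the harvested database.

```python
# Illustrative natural-language-to-SQL sketch in the spirit of the
# "Data Processing Agent" GTIG ascribes to APT42. The schema, question,
# and model name are invented; the generated SQL is printed, not run.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model name

schema = "contacts(name TEXT, city TEXT, phone TEXT, last_seen DATE)"
question = "Which contacts were seen in the last 30 days?"

prompt = (
    f"Given the SQLite table {schema}, write one SQL query that answers: "
    f"{question}. Output only the SQL."
)

print(model.generate_content(prompt).text)
```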
North Korean groups UNC1069 (MASAN) and UNC4899 (PUKCHONG) were caught using Gemini to research cryptocurrency wallets, develop stealer scripts, and even craft deepfake profiles for social engineering.
“UNC1069 used Gemini to research cryptocurrency concepts and perform reconnaissance related to wallet application data,” the report detailed.
One campaign used AI-generated deepfake videos impersonating cryptocurrency executives to lure victims into downloading a malicious “Zoom SDK,” which deployed the BIGMACHO backdoor.
GTIG also warns that underground marketplaces for illicit AI tooling have matured significantly throughout 2025.
“We have identified multiple offerings of multifunctional tools designed to support phishing, malware development, and vulnerability research, lowering the barrier to entry for less sophisticated actors,” the report notes.
These underground AI kits, advertised on English- and Russian-language forums, mimic legitimate SaaS platforms, offering subscription-based tiers, Discord access, and premium features such as API integration.
Capabilities range from deepfake generation for phishing to LLM-driven vulnerability scanners, illustrating a future where malware-as-a-service meets AI-as-a-service.
Despite these alarming findings, GTIG emphasized that none of the identified AI-powered malware currently poses a direct, large-scale threat.
Google DeepMind has since strengthened Gemini’s classifiers and safety guardrails to help prevent similar misuse. The company has also disabled assets linked to the identified actors and shared intelligence with law-enforcement partners.
Still, GTIG cautions that 2025 may be remembered as the year AI began actively fighting on both sides of the cybersecurity battlefield — as both a tool for defenders and a weapon for attackers.