A critical vulnerability has been discovered in LangChain, the popular open-source framework used to power Large Language Model (LLM) agents. The flaw, tracked as CVE-2025-68664, carries a severe CVSS score of 9.3 and could allow attackers to extract sensitive environment variables or trigger unintended system actions.
The vulnerability stems from how LangChain handles data serialization—the process of converting complex objects into a format that can be stored or transmitted. Due to a failure to properly escape specific dictionary keys, malicious data can be disguised as legitimate LangChain objects.
The core of the issue lies in LangChain’s dumps() and dumpd() functions. These utilities are supposed to safely serialize data, but researchers found they failed to escape dictionaries containing a specific key: “lc”.
“The ‘lc’ key is used internally by LangChain to mark serialized objects,” the advisory explains. “When user-controlled data contains this key structure, it is treated as a legitimate LangChain object during deserialization rather than plain user data.”
This seemingly minor oversight opens a dangerous door. If an attacker can inject a dictionary with this specific key into a data stream—for example, via an LLM’s response metadata—they can trick the system into executing internal logic during the loading process.
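The marker-key collision can be illustrated with a simplified loader. The code below is not LangChain's actual implementation; it is a hypothetical sketch showing why a deserializer that dispatches purely on the presence of an “lc” key cannot distinguish framework objects from attacker-supplied data that imitates the same shape.

```python
import json

# Illustrative only -- NOT LangChain's real loader. A deserializer that
# dispatches on the "lc" marker key will "revive" any dict that imitates
# the serialized-object shape, even if it arrived as untrusted user data.

def naive_load(node):
    """Recursively revive serialized objects, dispatching on the 'lc' key."""
    if isinstance(node, dict):
        if node.get("lc") == 1:
            # Treated as a framework object: internal logic runs here.
            return f"<revived object: {node.get('type')}>"
        return {k: naive_load(v) for k, v in node.items()}
    if isinstance(node, list):
        return [naive_load(v) for v in node]
    return node

# User-controlled metadata that merely *looks* like a serialized object:
untrusted = json.loads('{"note": {"lc": 1, "type": "secret", "id": ["ENV_VAR"]}}')
print(naive_load(untrusted))  # the attacker's dict is "revived", not kept as data
```

Because the dispatch check is structural, escaping (or refusing to escape) that one key is the entire difference between inert data and executable deserialization logic.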
What makes this vulnerability particularly concerning is its potential attack vector: Prompt Injection.
According to the report, “The most common attack vector is through LLM response fields like additional_kwargs or response_metadata, which can be controlled via prompt injection.”
In a real-world scenario, an attacker could manipulate an LLM to output a specific JSON structure. When the application processes this output, it unknowingly deserializes the payload. The consequences can be severe.
“Attackers who control serialized data can extract environment variable secrets,” the report warns. By injecting a payload such as {"lc": 1, "type": "secret", "id": ["ENV_VAR"]}, an attacker can force the application to resolve and reveal hidden API keys or passwords, particularly if the application is running with the legacy setting secrets_from_env=True.
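To make the leak concrete, here is a hedged, simplified model of how a “secret” payload could resolve an environment variable under legacy secrets_from_env-style behavior. The resolver function and the DEMO_API_KEY variable are stand-ins for illustration, not LangChain's actual code.

```python
import os

# Simplified model of secret resolution during deserialization
# (illustrative; not LangChain's actual resolver). If user-controlled
# data reaches this code path, the named environment variable is
# read and returned to whatever consumes the deserialized output.

def resolve_secret(node, secrets_from_env=True):
    if (
        isinstance(node, dict)
        and node.get("lc") == 1
        and node.get("type") == "secret"
    ):
        (var_name,) = node["id"]
        if secrets_from_env:
            return os.environ.get(var_name)  # the secret leaks to the caller
    return node

os.environ["DEMO_API_KEY"] = "sk-demo-123"  # stand-in secret for the demo
payload = {"lc": 1, "type": "secret", "id": ["DEMO_API_KEY"]}
print(resolve_secret(payload))  # prints the resolved secret value
```

In a real attack the payload would arrive inside LLM response metadata, and the resolved value would surface wherever the application echoes or logs the deserialized object.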
Beyond data theft, the flaw also allows for the instantiation of arbitrary classes within trusted namespaces, potentially leading to “side effects such as network calls or file operations.”
The vulnerability impacts a wide range of versions across the LangChain ecosystem:
- LangChain Core: Versions < 0.3.81
- LangChain: Versions >= 1.0.0 and < 1.2.5
The maintainers have released patched versions to address the flaw. Developers are strongly urged to upgrade immediately to:
- LangChain 1.2.5
- LangChain Core 0.3.81
The patch fixes the escaping logic in the serialization functions, ensuring that user-controlled “lc” keys are treated as harmless data rather than actionable commands.
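The escaping idea behind the patch can be sketched as follows. This is an illustrative round-trip, not the actual LangChain diff; the ESCAPE_KEY marker name is hypothetical. The point is that any plain dict which already contains an “lc” key gets wrapped during serialization, so on load it is restored as data instead of being dispatched as a framework object.

```python
# Hedged sketch of marker-key escaping (illustrative, not the real patch).
# A plain dict containing "lc" is wrapped under a reserved escape key on
# dump, and unwrapped back to ordinary data on load.

ESCAPE_KEY = "lc_escaped"  # hypothetical marker name

def safe_dump(node):
    if isinstance(node, dict):
        out = {k: safe_dump(v) for k, v in node.items()}
        if "lc" in out:
            return {ESCAPE_KEY: out}  # neutralize the marker collision
        return out
    if isinstance(node, list):
        return [safe_dump(v) for v in node]
    return node

def safe_load(node):
    if isinstance(node, dict):
        if ESCAPE_KEY in node and len(node) == 1:
            # Unwrap: restore the user's dict as plain data, no dispatch.
            return {k: safe_load(v) for k, v in node[ESCAPE_KEY].items()}
        return {k: safe_load(v) for k, v in node.items()}
    if isinstance(node, list):
        return [safe_load(v) for v in node]
    return node

attacker = {"lc": 1, "type": "secret", "id": ["ENV_VAR"]}
round_trip = safe_load(safe_dump(attacker))
print(round_trip == attacker)  # the payload survives only as inert data
```

With this shape, the deserializer's object-dispatch path is only ever reached by dicts the framework itself emitted, which is the invariant the fix restores.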