
=====================================================================

                            CERT-Renater

                Note d'Information No. 2025/VULN902
_____________________________________________________________________

DATE                : 24/12/2025

HARDWARE PLATFORM(S): /

OPERATING SYSTEM(S): Systems running langchain-core (pip) versions
                     prior to 1.2.5, 0.3.81;
                     @langchain/core (npm) versions prior to 1.1.8, 0.3.80;
                     langchain (npm) versions prior to 1.2.3, 0.3.37.

=====================================================================
https://github.com/langchain-ai/langchain/security/advisories/GHSA-c67j-w6g6-q2cm
https://github.com/advisories/GHSA-r399-636x-v7f6
_____________________________________________________________________


LangChain serialization injection vulnerability enables secret
extraction in dumps/loads APIs
Critical
eyurtsev published GHSA-c67j-w6g6-q2cm Dec 23, 2025

Package
langchain-core (pip)

Affected versions
>= 1.0.0, < 1.2.5
< 0.3.81

Patched versions
1.2.5
0.3.81


Description

Summary

A serialization injection vulnerability exists in LangChain's dumps() and
dumpd() functions. The functions do not escape dictionaries with 'lc' keys
when serializing free-form dictionaries. The 'lc' key is used internally
by LangChain to mark serialized objects. When user-controlled data
contains this key structure, it is treated during deserialization as a
legitimate LangChain object rather than as plain user data.
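
For illustration, this is roughly what the internal marker looks like when a
genuine Serializable is dumped (a sketch; the exact 'id' path varies across
versions):

from langchain_core.load import dumps
from langchain_core.messages import HumanMessage

# A genuine Serializable is tagged with the internal 'lc' key, e.g.:
# {"lc": 1, "type": "constructor",
#  "id": ["langchain", "schema", "messages", "HumanMessage"],
#  "kwargs": {"content": "hi"}}
print(dumps(HumanMessage(content="hi"), pretty=True))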


Attack surface

The core vulnerability was in dumps() and dumpd(): these functions failed to
escape user-controlled dictionaries containing 'lc' keys. When this unescaped
data was later deserialized via load() or loads(), the injected structures
were treated as legitimate LangChain objects rather than plain user data.

This escaping bug enabled several attack vectors:

    Injection via user data: Malicious LangChain object structures could be
injected through user-controlled fields like metadata, additional_kwargs, or
response_metadata.
    Class instantiation within trusted namespaces: Injected manifests could
instantiate any Serializable subclass, but only within the pre-approved
trusted namespaces (langchain_core, langchain, langchain_community). This
includes classes with side effects in __init__ (network calls, file
operations, etc.). Note that namespace validation was already enforced before
this patch, so arbitrary classes outside these trusted namespaces could not
be instantiated (see the sketch after this list).
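
A constructor-style injection would look roughly like the following
(a hypothetical sketch; the class path and kwargs are placeholders):

# Hypothetical payload: a 'constructor' manifest smuggled inside user data.
# On vulnerable versions, load()/loads() would instantiate the referenced
# class (if it resolved within a trusted namespace) with these kwargs.
payload = {
    "lc": 1,
    "type": "constructor",
    "id": ["langchain_core", "some_module", "SomeSerializableClass"],  # placeholder
    "kwargs": {"param": "attacker-controlled value"},
}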


Security hardening

This patch fixes the escaping bug in dumps() and dumpd() and introduces new
restrictive defaults in load() and loads(): allowlist enforcement via
allowed_objects="core" (deserialization restricted to langchain_core classes
from the serialization mappings), secrets_from_env changed from True to False,
and Jinja2 template blocking by default via init_validator. These are
breaking changes for some use cases.


Who is affected?

Applications are vulnerable if they:

    Use astream_events(version="v1") — The v1 implementation internally uses
vulnerable serialization. Note: astream_events(version="v2") is not
vulnerable (see the sketch after this list).
    Use Runnable.astream_log() — This method internally uses vulnerable
serialization for streaming outputs.
    Call dumps() or dumpd() on untrusted data, then deserialize with load() or
loads() — Trusting your own serialization output makes you vulnerable if
user-controlled data (e.g., from LLM responses, metadata fields, or user
inputs) contains 'lc' key structures.
    Deserialize untrusted data with load() or loads() — Directly deserializing
untrusted data that may contain injected 'lc' structures.
    Use RunnableWithMessageHistory — Internal serialization in message history
handling.
    Use InMemoryVectorStore.load() to deserialize untrusted documents.
    Load untrusted generations from cache using langchain-community caches.
    Load untrusted manifests from the LangChain Hub via hub.pull.
    Use StringRunEvaluatorChain on untrusted runs.
    Use create_lc_store or create_kv_docstore with untrusted documents.
    Use MultiVectorRetriever with byte stores containing untrusted documents.
    Use LangSmithRunChatLoader with runs containing untrusted messages.
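
As a sketch of the first item above (assuming chain is any Runnable from
langchain-core):

import asyncio

async def main(chain, payload):
    # Affected: the v1 event stream serializes chunks on the vulnerable path
    async for event in chain.astream_events(payload, version="v1"):
        print(event["event"])

    # Unaffected: the v2 implementation avoids the vulnerable serialization
    async for event in chain.astream_events(payload, version="v2"):
        print(event["event"])

# e.g. asyncio.run(main(chain, {"input": "hi"}))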

The most common attack vector is through LLM response fields like
additional_kwargs or response_metadata, which can be controlled via prompt
injection and then serialized/deserialized in streaming operations.
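
A minimal sketch of that flow, assuming the attacker has steered the model
(via prompt injection) into emitting an 'lc' structure inside
additional_kwargs:

from langchain_core.load import dumps, loads
from langchain_core.messages import AIMessage

# Hypothetical LLM response carrying an injected payload in additional_kwargs
msg = AIMessage(
    content="ok",
    additional_kwargs={"probe": {"lc": 1, "type": "secret",
                                 "id": ["OPENAI_API_KEY"]}},
)

wire = dumps(msg)  # pre-patch: the nested 'lc' structure is not escaped
restored = loads(wire, secrets_from_env=True)  # old default resolved secrets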


Impact

Attackers who control serialized data can extract environment variable secrets
by injecting {"lc": 1, "type": "secret", "id": ["ENV_VAR"]}, which loads the
named environment variable during deserialization (when secrets_from_env=True,
the old default). They can also inject constructor structures to instantiate
any Serializable subclass within the trusted namespaces with
attacker-controlled parameters, potentially triggering side effects such as
network calls or file operations.

Key severity factors:

    Affects the serialization path - applications trusting their own
serialization output are vulnerable
    Enables secret extraction when combined with secrets_from_env=True (the
old default)
    LLM responses in additional_kwargs can be controlled via prompt injection

Exploit example

from langchain_core.load import dumps, loads
import os

# Attacker injects secret structure into user-controlled data
attacker_dict = {
    "user_data": {
        "lc": 1,
        "type": "secret",
        "id": ["OPENAI_API_KEY"]
    }
}

serialized = dumps(attacker_dict)  # Bug: does NOT escape the 'lc' key

os.environ["OPENAI_API_KEY"] = "sk-secret-key-12345"
deserialized = loads(serialized, secrets_from_env=True)

print(deserialized["user_data"])  # "sk-secret-key-12345" - SECRET LEAKED!
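
For contrast, a sketch of the expected behavior on patched versions
(langchain-core >= 0.3.81 / 1.2.5), per the hardening described below:

serialized = dumps(attacker_dict)  # patched: the 'lc' structure is escaped
deserialized = loads(serialized)   # secrets_from_env now defaults to False

# The payload round-trips as plain data instead of resolving a secret:
print(deserialized["user_data"])
# {'lc': 1, 'type': 'secret', 'id': ['OPENAI_API_KEY']}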

Security hardening changes (breaking changes)

This patch introduces three breaking changes to load() and loads():

    New allowed_objects parameter (defaults to 'core'): Enforces an allowlist
of classes that can be deserialized. The 'all' option corresponds to the list
of objects specified in mappings.py, while the 'core' option limits
deserialization to objects within langchain_core. We recommend that users
explicitly specify which objects they want to allow for
serialization/deserialization (see the sketch after this list).
    secrets_from_env default changed from True to False: Disables automatic
secret loading from the environment.
    New init_validator parameter (defaults to default_init_validator):
Blocks Jinja2 templates by default.
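
A sketch of the allowed_objects options (string values as described above):

from langchain_core.load import dumps, loads
from langchain_core.messages import AIMessage

data = dumps(AIMessage(content="hi"))
obj = loads(data)                         # default allowlist: allowed_objects="core"
obj = loads(data, allowed_objects="all")  # full allowlist from mappings.py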


Migration guide

No changes needed for most users

If you're deserializing standard LangChain types (messages, documents, prompts,
trusted partner integrations like ChatOpenAI, ChatAnthropic, etc.), your code
will work without changes:

from langchain_core.load import load

# Uses default allowlist from serialization mappings
obj = load(serialized_data)

For custom classes

If you're deserializing custom classes not in the serialization mappings,
add them to the allowlist:

from langchain_core.load import load
from my_package import MyCustomClass

# Specify the classes you need
obj = load(serialized_data, allowed_objects=[MyCustomClass])

For Jinja2 templates

Jinja2 templates are now blocked by default because they can execute arbitrary code.
If you need Jinja2 templates, pass init_validator=None:

from langchain_core.load import load
from langchain_core.prompts import PromptTemplate

obj = load(
    serialized_data,
    allowed_objects=[PromptTemplate],
    init_validator=None
)

Warning

Only disable init_validator if you trust the serialized data. Jinja2
templates can execute arbitrary Python code.

For secrets from environment

secrets_from_env now defaults to False. If you need to load secrets from
environment variables:

from langchain_core.load import load

obj = load(serialized_data, secrets_from_env=True)


Credits

    The dumps() bug was reported by @YardenPorat.
    Security hardening changes follow findings from @0xn3va and
@VladimirEliTokarev.


Severity
Critical
9.3 / 10

CVSS v3 base metrics
Attack vector
Network
Attack complexity
Low
Privileges required
None
User interaction
None
Scope
Changed
Confidentiality
High
Integrity
Low
Availability
None
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:L/A:N

CVE ID
CVE-2025-68664

Weaknesses
No CWEs


Credits

    @0xn3va (Reporter)
    @yardenporat353 (Reporter)
    @VladimirEliTokarev (Reporter)
    @eyurtsev (Remediation developer)
    @ccurme (Remediation developer)
    @mdrxy (Remediation developer)
    @hntrl (Remediation developer)

_____________________________________________________________________


LangChain serialization injection vulnerability enables secret
extraction
High severity (GitHub Reviewed)
Published Dec 23, 2025 in langchain-ai/langchainjs; updated Dec 24, 2025


Vulnerability details

Package
@langchain/core (npm)

Affected versions
>= 1.0.0, < 1.1.8
< 0.3.80

Patched versions
1.1.8
0.3.80

Package
langchain (npm)

Affected versions
>= 1.0.0, < 1.2.3
< 0.3.37

Patched versions
1.2.3
0.3.37


Description

Context

A serialization injection vulnerability exists in LangChain JS's toJSON()
method (and, consequently, in JSON.stringify() applied to Serializable
objects). The method did not escape objects with 'lc' keys when serializing
free-form data in kwargs. The 'lc' key is used internally by LangChain to
mark serialized objects. When user-controlled data contains this key
structure, it is treated during deserialization as a legitimate LangChain
object rather than as plain user data.


Attack surface

The core vulnerability was in Serializable.toJSON(): this method failed
to escape user-controlled objects containing 'lc' keys within kwargs
(e.g., additional_kwargs, metadata, response_metadata). When this
unescaped data was later deserialized via load(), the injected
structures were treated as legitimate LangChain objects rather than
plain user data.

This escaping bug enabled several attack vectors:

    Injection via user data: Malicious LangChain object structures could be
injected through user-controlled fields like metadata, additional_kwargs,
or response_metadata.
    Secret extraction: Injected secret structures could extract environment
variables when secretsFromEnv was enabled (which had no explicit default and
effectively behaved as true).
    Class instantiation via import maps: Injected constructor structures
could instantiate any class available in the provided import maps with
attacker-controlled parameters.

Note on import maps: Classes must be explicitly included in import maps to
be instantiatable. The core import map includes standard types
(messages, prompts, documents), and users can extend this via importMap and
optionalImportsMap options. This architecture naturally limits the attack
surface—an allowedObjects parameter is not necessary because users control
which classes are available through the import maps they provide.

Security hardening: This patch fixes the escaping bug in toJSON() and
introduces new restrictive defaults in load(): secretsFromEnv now
explicitly defaults to false, and a maxDepth parameter protects against
DoS via deeply nested structures. JSDoc security warnings have been
added to all import map options.


Who is affected?

Applications are vulnerable if they:

    Serialize untrusted data via JSON.stringify() on Serializable objects,
then deserialize with load() — Trusting your own serialization output makes
you vulnerable if user-controlled data (e.g., from LLM responses, metadata
fields, or user inputs) contains 'lc' key structures.
    Deserialize untrusted data with load() — Directly deserializing
untrusted data that may contain injected 'lc' structures.
    Use LangGraph checkpoints — Checkpoint serialization/deserialization
paths may be affected.

The most common attack vector is through LLM response fields like
additional_kwargs or response_metadata, which can be controlled via prompt
injection and then serialized/deserialized in streaming operations.


Impact

Attackers who control serialized data can extract environment variable
secrets by injecting {"lc": 1, "type": "secret", "id": ["ENV_VAR"]}, which
loads the named environment variable during deserialization (when
secretsFromEnv: true). They can also inject constructor structures to
instantiate any class within the provided import maps with
attacker-controlled parameters, potentially triggering side effects such as
network calls or file operations.

Key severity factors:

    Affects the serialization path—applications trusting their own
serialization output are vulnerable
    Enables secret extraction when combined with secretsFromEnv: true
    LLM responses in additional_kwargs can be controlled via prompt
injection

Exploit example

import { load } from "@langchain/core/load";

// Attacker injects secret structure into user-controlled data
const attackerPayload = JSON.stringify({
  user_data: {
    lc: 1,
    type: "secret",
    id: ["OPENAI_API_KEY"],
  },
});

process.env.OPENAI_API_KEY = "sk-secret-key-12345";

// With secretsFromEnv: true, the secret is extracted
const deserialized = await load(attackerPayload, { secretsFromEnv: true });

console.log(deserialized.user_data); // "sk-secret-key-12345" - SECRET LEAKED!

Security hardening changes

This patch introduces the following changes to load():

    secretsFromEnv default changed to false: Disables automatic secret
loading from environment variables. Secrets not found in secretsMap now
throw an error instead of being loaded from process.env. This fail-safe
behavior ensures missing secrets are caught immediately rather than
silently continuing with null.
    New maxDepth parameter (defaults to 50): Protects against denial-of-service
attacks via deeply nested JSON structures that could cause stack overflow.
    Escape mechanism in toJSON(): User-controlled objects containing 'lc' keys
are now wrapped in {"__lc_escaped__": {...}} during serialization and unwrapped
as plain data during deserialization (sketched after this list).
    JSDoc security warnings: All import map options
(importMap, optionalImportsMap, optionalImportEntrypoints) now include
security warnings about never populating them from user input.
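
For illustration, the wire format before and after the fix (using the wrapper
described above):

Pre-patch (the nested structure is interpreted as a LangChain secret by load()):

{"user_data": {"lc": 1, "type": "secret", "id": ["OPENAI_API_KEY"]}}

Post-patch (escaped by toJSON(), unwrapped back to plain data by load()):

{"user_data": {"__lc_escaped__": {"lc": 1, "type": "secret", "id": ["OPENAI_API_KEY"]}}}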

Migration guide

No changes needed for most users

If you're deserializing standard LangChain types
(messages, documents, prompts) using the core import map, your code will
work without changes:

import { load } from "@langchain/core/load";

// Works with default settings
const obj = await load(serializedData);

For secrets from environment

secretsFromEnv now defaults to false, and missing secrets throw an error.
If you need to load secrets:

import { load } from "@langchain/core/load";

// Provide secrets explicitly (recommended)
const obj = await load(serializedData, {
  secretsMap: { OPENAI_API_KEY: process.env.OPENAI_API_KEY },
});

// Or explicitly opt in to loading from env (only use with trusted data)
const objFromEnv = await load(serializedData, { secretsFromEnv: true });

    Warning: Only enable secretsFromEnv if you trust the serialized data.
Untrusted data could extract any environment variable.

    Note: If a secret reference is encountered but not found in secretsMap
(and secretsFromEnv is false or the secret is not in the environment),
an error is thrown. This fail-safe behavior ensures you're aware of
missing secrets rather than silently receiving null values.

For deeply nested structures

If you have legitimate deeply nested data that exceeds the default depth
limit of 50:

import { load } from "@langchain/core/load";

const obj = await load(serializedData, { maxDepth: 100 });

For custom import maps

If you provide custom import maps, ensure they only contain trusted
modules:

import { load } from "@langchain/core/load";
import * as myModule from "./my-trusted-module";

// GOOD - explicitly include only trusted modules
const obj = await load(serializedData, {
  importMap: { my_module: myModule },
});

// BAD - never populate from user input
const unsafeObj = await load(serializedData, {
  importMap: userProvidedImports, // DANGEROUS!
});


References

    GHSA-r399-636x-v7f6
    langchain-ai/langchainjs@e5063f9
    https://github.com/langchain-ai/langchainjs/releases/tag/%40langchain%2Fcore%401.1.8
    https://github.com/langchain-ai/langchainjs/releases/tag/langchain%401.2.3
    https://nvd.nist.gov/vuln/detail/CVE-2025-68665

@hntrl published to langchain-ai/langchainjs Dec 23, 2025
Published to the GitHub Advisory Database Dec 23, 2025
Reviewed Dec 23, 2025
Published by the National Vulnerability Database Dec 23, 2025
Last updated Dec 24, 2025

Severity
High
8.6 / 10

CVSS v3 base metrics
Attack vector
Network
Attack complexity
Low
Privileges required
None
User interaction
None
Scope
Changed
Confidentiality
High
Integrity
None
Availability
None
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:N/A:N

Weaknesses
CWE-502

CVE ID
CVE-2025-68665

GHSA ID
GHSA-r399-636x-v7f6

Source code
langchain-ai/langchainjs


Credits

    @ccurme (Remediation developer)
    @mdrxy (Remediation developer)
    @0xn3va (Reporter)
    @yardenporat353 (Reporter)
    @VladimirEliTokarev (Reporter)
    @hntrl (Remediation developer)
    @siewer (Reporter)
    @jacoblee93 (Remediation verifier)



=========================================================
+ CERT-RENATER        |    tel : 01-53-94-20-44         +
+ 23/25 Rue Daviel    |    fax : 01-53-94-20-41         +
+ 75013 Paris         |   email:cert@support.renater.fr +
=========================================================




