CVE-2025-65106 is a low-severity vulnerability with a CVSS score of 0.0. There are currently no known exploits, and patches are available.
Very low probability of exploitation
EPSS predicts the probability of exploitation in the next 30 days based on real-world threat data, complementing CVSS severity scores with actual risk assessment.
A template injection vulnerability exists in LangChain's prompt template system that allows attackers to access Python object internals through template syntax. This vulnerability affects applications that accept untrusted template strings (not just template variables) in ChatPromptTemplate and related prompt template classes.
Templates allow attribute access (.) and indexing ([]) but not method invocation (()).
The combination of attribute access and indexing may enable exploitation depending on which objects are passed to templates. When template variables are simple strings (the common case), the impact is limited. However, when using MessagesPlaceholder with chat message objects, attackers can traverse through object attributes and dictionary lookups (e.g., __globals__) to reach sensitive data such as environment variables.
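To illustrate the underlying mechanism independent of LangChain, here is a minimal sketch using only Python's built-in str.format: attribute access and item lookup inside a single field name are enough to walk from an ordinary function object to os.environ. The helper function is hypothetical and exists only for demonstration.
import os

def helper():
    pass

# str.format resolves dotted attribute access and [key] item lookups inside
# a field name. Starting from a plain function, __globals__ exposes the
# module namespace, which contains the imported os module.
template = "{f.__globals__[os].environ}"
print(template.format(f=helper))  # prints the process environment variables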
The vulnerability specifically requires that applications accept template strings (the structure) from untrusted sources, not just template variables (the data). Most applications either do not use templates or else use hardcoded templates and are not vulnerable.
Affected components:
- langchain-core package
- F-string templates (template_format="f-string") - Vulnerability fixed
- Mustache templates (template_format="mustache") - Defensive hardening
- Jinja2 templates (template_format="jinja2") - Defensive hardening

Attackers who can control template strings (not just template variables) can:
- Access internal Python object attributes (e.g., __class__, __globals__)
- Traverse object internals to reach sensitive data such as environment variables
Before Fix (f-string templates):
from langchain_core.prompts import ChatPromptTemplate
malicious_template = ChatPromptTemplate.from_messages(
[("human", "{msg.__class__.__name__}")],
template_format="f-string"
)
# Note that this requires passing a placeholder variable for "msg.__class__.__name__".
result = malicious_template.invoke({"msg": "foo", "msg.__class__.__name__": "safe_placeholder"})
# Previously returned:
# >>> result.messages[0].content
# 'str'
Before Fix (Mustache templates):
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.messages import HumanMessage
msg = HumanMessage("Hello")
# Attacker controls the template string
malicious_template = ChatPromptTemplate.from_messages(
[("human", "{{question.__class__.__name__}}")],
template_format="mustache"
)
result = malicious_template.invoke({"question": msg})
# Previously returned: "HumanMessage" (getattr() exposed internals)
Before Fix (Jinja2 templates):
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.messages import HumanMessage
msg = HumanMessage("Hello")
# Attacker controls the template string
malicious_template = ChatPromptTemplate.from_messages(
[("human", "{{question.parse_raw}}")],
template_format="jinja2"
)
result = malicious_template.invoke({"question": msg})
# Could access non-dunder attributes/methods on objects
The f-string implementation used string.Formatter().parse() to extract variable names from template strings. This method returns the complete field expression, including attribute access syntax:
from string import Formatter
template = "{msg.__class__} and {x}"
print([var_name for (_, var_name, _, _) in Formatter().parse(template)])
# Returns: ['msg.__class__', 'x']
The extracted names were not validated to ensure they were simple identifiers. As a result, template strings containing attribute traversal and indexing expressions (e.g., {obj.__class__.__name__} or {obj.method.__globals__[os]}) were accepted and subsequently evaluated during formatting. While f-string templates do not support method calls with (), they do support [] indexing, which could allow traversal through dictionaries like __globals__ to reach sensitive objects.
The Mustache implementation used getattr() as a fallback to support accessing attributes on objects (e.g., {{user.name}} on a User object). However, we decided to restrict this to simpler primitives that subclass dict, list, and tuple types as defensive hardening, since untrusted templates could exploit attribute access to reach internal properties like __class__ on arbitrary objects.
Jinja2's SandboxedEnvironment blocks dunder attributes (e.g., __class__) but permits access to other attributes and methods on objects. While Jinja2 templates in LangChain are typically used with trusted template strings, as a defense-in-depth measure, we've restricted the environment to block all attribute and method access on objects passed to templates.
You are affected if your application accepts template strings (the template structure itself, not just template variable values) from untrusted sources and passes them to ChatPromptTemplate or related prompt template classes.
Example vulnerable code:
# User controls the template string itself
user_template_string = request.json.get("template") # DANGEROUS
prompt = ChatPromptTemplate.from_messages(
[("human", user_template_string)],
template_format="mustache"
)
result = prompt.invoke({"data": sensitive_object})
You are NOT affected if your template strings are hardcoded or come only from trusted sources and untrusted input supplies only template variable values, or if your application does not use prompt templates at all.
Example safe code:
# Template is hardcoded - users only control variables
prompt = ChatPromptTemplate.from_messages(
[("human", "User question: {question}")], # SAFE
template_format="f-string"
)
# User input only fills the 'question' variable
result = prompt.invoke({"question": user_input})
F-string templates had a clear vulnerability where attribute access syntax was exploitable. We've added strict validation to prevent this:
- Rejected: attribute access and indexing in field names, such as {obj.attr}, {obj[0]}, or {obj.__class__}
- Allowed: simple variable names only, such as {variable_name}

# After fix - these are rejected at template creation time
ChatPromptTemplate.from_messages(
[("human", "{msg.__class__}")], # ValueError: Invalid variable name
template_format="f-string"
)
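For intuition, here is a minimal sketch of the kind of identifier check described above; the function name is hypothetical and this is not the actual langchain-core implementation.
from string import Formatter

def validate_fstring_template(template: str) -> None:
    # Reject any field name that is not a plain identifier, i.e. anything
    # containing attribute access ('.') or indexing ('[...]').
    for _, field_name, _, _ in Formatter().parse(template):
        if field_name is not None and not field_name.isidentifier():
            raise ValueError(f"Invalid variable name: {field_name!r}")

validate_fstring_template("User question: {question}")  # passes
validate_fstring_template("{msg.__class__}")            # raises ValueError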
As defensive hardening, we've restricted what Mustache templates support to reduce the attack surface:
- Replaced the getattr() fallback with strict type checking
- Lookups now only descend into dict, list, and tuple types

# After hardening - attribute access returns empty string
prompt = ChatPromptTemplate.from_messages(
[("human", "{{msg.__class__}}")],
template_format="mustache"
)
result = prompt.invoke({"msg": HumanMessage("test")})
# Returns: "" (access blocked)
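As a rough illustration of this hardening, the resolver below only descends into plain dict, list, and tuple values and never falls back to getattr(); it is a hypothetical sketch, not the actual langchain-core code.
def resolve_mustache_variable(scope, name):
    # Walk a dotted name such as "user.name" or "items.0", descending only
    # into plain container types. Attribute access on arbitrary objects
    # (e.g. "msg.__class__") is blocked and renders as an empty string.
    current = scope
    for part in name.split("."):
        if isinstance(current, dict) and part in current:
            current = current[part]
        elif isinstance(current, (list, tuple)) and part.isdigit() and int(part) < len(current):
            current = current[int(part)]
        else:
            return ""
    return current

print(resolve_mustache_variable({"user": {"name": "Ada"}}, "user.name"))  # Ada
print(resolve_mustache_variable({"msg": object()}, "msg.__class__"))      # ""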
As defensive hardening, we've significantly restricted Jinja2 template capabilities:
- Added a _RestrictedSandboxedEnvironment that blocks ALL attribute/method access
- Raises SecurityError on any attribute access attempt

# After hardening - all attribute access is blocked
prompt = ChatPromptTemplate.from_messages(
[("human", "{{msg.content}}")],
template_format="jinja2"
)
# Raises SecurityError: Access to attributes is not allowed
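To show what blocking all attribute access can look like in Jinja2, here is a hypothetical sketch built on Jinja2's sandbox; the class name is made up and this is not the actual _RestrictedSandboxedEnvironment from langchain-core.
from jinja2.exceptions import SecurityError
from jinja2.sandbox import SandboxedEnvironment

class DemoRestrictedEnvironment(SandboxedEnvironment):
    # Refuse every attribute lookup outright instead of only unsafe ones.
    def getattr(self, obj, attribute):
        raise SecurityError(f"Access to attribute {attribute!r} is not allowed")

env = DemoRestrictedEnvironment()
template = env.from_string("{{ msg.content }}")
template.render(msg=object())  # raises SecurityError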
Important Recommendation: Due to the expressiveness of Jinja2 and the difficulty of fully sandboxing it, we recommend reserving Jinja2 templates for trusted sources only. If you need to accept template strings from untrusted users, use f-string or mustache templates with the new restrictions instead.
While we've hardened the Jinja2 implementation, the nature of templating engines makes comprehensive sandboxing challenging. The safest approach is to only use Jinja2 templates when you control the template source.
Important Reminder: Many applications do not need prompt templates. Templates are useful for variable substitution and dynamic logic (if statements, loops, conditionals). However, if you're building a chatbot or conversational application, you can often work directly with message objects (e.g., HumanMessage, AIMessage, ToolMessage) without templates. Direct message construction avoids template-related security concerns entirely.
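For example, a chat turn can be assembled without any template layer at all; the snippet below is a minimal sketch, and the model variable is assumed to be any chat model instance.
from langchain_core.messages import HumanMessage, SystemMessage

# Untrusted user input stays plain message content; it is never parsed
# as a template, so there is no injection surface.
user_input = "What does CVE-2025-65106 affect?"
messages = [
    SystemMessage("You are a helpful assistant."),
    HumanMessage(user_input),
]
# response = model.invoke(messages)  # `model` is any chat model instance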
Remediation:
- Upgrade langchain-core to a patched release.
- Keep template strings hardcoded or sourced only from trusted parties; let untrusted input supply template variable values only.
- For chat applications, construct messages directly (HumanMessage, AIMessage, etc.) without templates.
Update: The Jinja2 hardening introduced in the initial patch has been reverted as of langchain-core 1.1.3. The restriction was not addressing a direct vulnerability but was part of broader defensive hardening. In practice, it significantly limited legitimate Jinja2 usage and broke existing templates. Since Jinja2 is intended to be used only with trusted template sources, the original behavior has been restored. Users should continue to avoid accepting untrusted template strings when using Jinja2, but no security issue exists with trusted templates.