Adversarial Inputs Amplify Reasoning Costs in Large Language Models

Recent research demonstrates the vulnerability of reasoning large language models, such as DeepSeek-R1 and OpenAI o1, to adversarial inputs designed to amplify computational cost. A novel loss framework, combining priority cross-entropy, excessive reasoning, and delayed termination losses, yields inputs that increase reasoning length threefold to ninefold without degrading answer accuracy, and the attack transfers across multiple models.

The escalating computational demands of large language models (LLMs) present a growing challenge, not only because of their energy consumption but also because of their potential vulnerability to adversarial manipulation. Researchers now demonstrate a method to subtly increase the reasoning length of these models, thereby substantially raising their computational cost without noticeably affecting the quality of their responses. Wai Man Si, Mingjie Li, Michael Backes, and Yang Zhang, all affiliated with the CISPA Helmholtz Center for Information Security, detail this ‘excessive reasoning attack’ in their paper, Excessive Reasoning Attack on Reasoning LLMs. Their work reveals that carefully crafted inputs can exploit inherent behaviours within LLMs, prompting them to engage in redundant or unnecessarily complex thought processes, and highlights a previously unrecognised security risk associated with increasingly sophisticated artificial intelligence.
The study shows that large language models are susceptible to adversarial inputs that deliberately inflate computational cost, exposing a largely overlooked weakness in current artificial intelligence systems. Specifically crafted inputs can induce excessive reasoning within LLMs, substantially increasing processing time without diminishing the model’s ability to generate correct answers. This vulnerability arises from the tendency of current LLM architectures to either insufficiently analyse complex problems or overthink simpler ones, a behaviour that adversarial examples amplify and exploit.

The core of this work centres on a novel loss framework, comprising three distinct components, used to optimise adversarial inputs that steer a model’s reasoning behaviour. Priority Cross-Entropy Loss refines the standard objective by prioritising key tokens, leveraging the autoregressive nature of LLMs (where the model predicts the next token in a sequence based on previous tokens) so that the manipulated input still elicits a correct final answer. Excessive Reasoning Loss encourages the model to explore multiple, often redundant, reasoning paths during inference, driving a more elaborate and computationally expensive approach to problem-solving. Finally, Delayed Termination Loss extends the reasoning process by deferring the generation of the final output, further increasing computational load while accuracy is maintained.
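
As a rough illustration of how these three terms could be combined, the sketch below expresses one possible joint objective in PyTorch. The weighting coefficients, the priority mask, the entropy-based form of the excessive reasoning term, and the end-of-reasoning token id are illustrative assumptions rather than the authors’ exact definitions.

```python
import torch
import torch.nn.functional as F

def combined_attack_loss(logits, targets, priority_mask, eor_token_id,
                         w_ce=1.0, w_reason=0.5, w_delay=0.5):
    """Illustrative combination of the three loss components (assumed form).

    logits        : (seq_len, vocab_size) model outputs for the crafted input
    targets       : (seq_len,) reference tokens for the answer span
    priority_mask : (seq_len,) weights emphasising key answer tokens
    eor_token_id  : id of the token that closes the reasoning ("thinking") span
    """
    # Priority cross-entropy: standard next-token loss, re-weighted so that
    # tokens critical to answer correctness dominate the objective.
    ce = F.cross_entropy(logits, targets, reduction="none")
    priority_ce = (priority_mask * ce).sum() / priority_mask.sum()

    probs = F.softmax(logits, dim=-1)

    # Excessive reasoning: reward high-entropy next-token distributions so the
    # model keeps branching into additional reasoning paths.
    entropy = -(probs * torch.log(probs + 1e-9)).sum(dim=-1).mean()
    excessive_reasoning = -entropy  # minimising this maximises entropy

    # Delayed termination: penalise probability mass on the end-of-reasoning
    # token, pushing the close of the thinking span later in the sequence.
    delayed_termination = probs[:, eor_token_id].mean()

    return (w_ce * priority_ce
            + w_reason * excessive_reasoning
            + w_delay * delayed_termination)
```

In the attack setting, an objective of this kind would be minimised with respect to the adversarial input (for example, an appended suffix) rather than the model weights, which is what allows answer accuracy to be preserved while reasoning length grows.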

Experiments utilising datasets such as GSM8K, a benchmark for mathematical problem solving, and models including Llama 2, demonstrate that inputs crafted with the proposed loss framework substantially increase computational cost without impacting answer accuracy. The researchers obtained the models from Hugging Face, a platform providing access to pre-trained models and datasets, and adhered to all licences and data usage agreements throughout, in keeping with responsible AI research practice. This work contributes to a growing body of knowledge concerning the efficiency and robustness of LLMs, and understanding such attacks offers a pathway towards more sustainable and cost-effective artificial intelligence systems.
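
A simple way to observe the reported effect is to compare the number of tokens a reasoning model generates for a benign prompt and for its adversarially modified counterpart. The sketch below uses the Hugging Face transformers library for such a comparison; the model name, the GSM8K-style question, and the placeholder suffix are assumptions for illustration, not artefacts from the paper.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model; substitute whichever reasoning model is under evaluation.
model_name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

def generated_tokens(prompt, max_new_tokens=4096):
    """Count the tokens the model generates for a prompt, a simple proxy
    for the computational cost of its reasoning."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return outputs.shape[-1] - inputs["input_ids"].shape[-1]

# GSM8K-style question with and without a hypothetical adversarial suffix.
benign = ("Natalia sold 48 clips in April and half as many in May. "
          "How many clips did she sell in total?")
adversarial = benign + " <adversarial suffix crafted by the attack>"  # placeholder only

print(generated_tokens(benign), generated_tokens(adversarial))
```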

Future work should focus on developing robust defence mechanisms against these adversarial attacks and on methods to dynamically regulate reasoning length based on input complexity. Incorporating cost-aware training strategies is another promising avenue, potentially enabling LLMs to balance accuracy against efficiency. Investigating the interplay between model size, training data, and susceptibility to excessive reasoning will be crucial for deploying LLMs in resource-constrained environments and ensuring their efficient operation.

Detailed ablation studies, presented in supplementary tables, investigate the impact of the individual loss objectives and target construction strategies on model performance, providing granular insights into the mechanisms driving the observed increases in reasoning length. Visualisations of token usage during trajectory expansion further illuminate these mechanisms, offering a deeper view of the model’s internal processes.

👉 More information
🗞 Excessive Reasoning Attack on Reasoning LLMs
🧠 DOI: https://doi.org/10.48550/arXiv.2506.14374

