Chain-of-Thought Activations Predict Unsafe Responses in Open-Weight Language Models

The increasing sophistication of artificial intelligence models, particularly those employing lengthy reasoning processes known as chains-of-thought, presents growing challenges for ensuring their safe and reliable operation. Yik Siu Chan, Zheng-Xin Yong, and Stephen H. Bach from Brown University investigate whether misalignment , the potential for harmful outputs , can be predicted during a model’s reasoning process, rather than only after it has finished. Their research demonstrates that a simple monitoring technique, analysing the internal workings of these models as they think, significantly outperforms methods that rely on examining the text generated during reasoning. Crucially, the team finds accurate predictions are possible even with only a portion of the reasoning completed, opening the door to real-time safety interventions and a new approach to aligning increasingly powerful AI systems.

Predicting Safety Issues in Reasoning Language Models

Recent advances in artificial intelligence focus on reasoning language models, which demonstrate improved performance on complex tasks by generating detailed chains-of-thought before providing a final answer. However, this approach introduces new safety concerns, as harmful content can appear both within the reasoning process and in the ultimate response. Researchers are now investigating whether it’s possible to predict potential safety issues before a model completes its reasoning, allowing for real-time intervention and improved alignment with human values. This is particularly important as refining models for reasoning skills can inadvertently compromise their original safety protocols.

Currently, assessing the safety of these models relies on examining the generated chains-of-thought, but this presents a significant challenge. These reasoning chains are often unreliable, meaning they don’t accurately reflect the model’s internal thought process and can mislead both human reviewers and AI-powered monitoring systems. The core question is whether the internal workings of the model, captured in its activations, can provide a more reliable signal than the text it generates. New research demonstrates that a surprisingly simple method, analyzing the model’s internal activations, outperforms all text-based monitoring approaches.

By training a basic analytical tool on these activations, researchers can accurately predict whether a model will produce a safe or unsafe response, even with limited data. Importantly, this predictive capability emerges early in the reasoning process, offering the potential for real-time monitoring and intervention. These findings have significant implications for the development of safer and more reliable AI systems. By focusing on the model’s internal state rather than its textual output, researchers can create more effective monitoring systems that can identify and mitigate potential safety risks before they manifest in harmful responses. This proactive approach promises to enhance the trustworthiness of reasoning language models and unlock their full potential for beneficial applications.

Reasoning Steps Predict Unsafe Language Model Outputs

Researchers have discovered that a surprisingly simple method, training a basic linear classifier on the internal workings of large language models, significantly outperforms more complex approaches in predicting whether the model will generate a safe or unsafe response. This research addresses a critical challenge: large language models now generate reasoning steps, known as chains-of-thought, before providing a final answer, which improves performance but also introduces risks of harmful content appearing in these intermediate steps. The team investigated whether these reasoning steps could be used to predict potential misalignment before it is generated. The findings reveal that analyzing the language of these reasoning steps directly, using either human reviewers or advanced language models, is often unreliable, as the reasoning can be misleading.

Instead, a linear classifier trained on the model’s internal “activations”, the numerical representations of information within the network, proves to be a far more accurate predictor of safety. This suggests that the model itself “knows” whether its reasoning is heading towards a problematic outcome, even before expressing it in language. Importantly, this predictive capability emerges early in the reasoning process, allowing for potential intervention before the final response is produced. The performance of this simple linear classifier surpasses that of human reviewers, sophisticated language models, and even fine-tuned classifiers designed to detect harmful content.

On a challenging test set, the linear classifier achieved a significantly higher F1 score, a measure of prediction accuracy, than all other methods. Furthermore, the classifier’s performance remains consistent even as the length of the reasoning steps increases, while more complex methods struggle with longer chains-of-thought. This research has significant implications for the development of safer and more reliable large language models. By providing a lightweight and accurate method for real-time safety monitoring, it opens the door to early intervention strategies that could prevent the generation of harmful content. The team demonstrated this approach across different model sizes, families, and safety benchmarks, suggesting its broad applicability and potential for widespread adoption. The discovery that internal model representations are more predictive than the language itself offers a new avenue for understanding and controlling the behavior of these powerful AI systems.

Internal Activations Predict Reasoning Model Alignment

This research investigates the ability to predict safety issues in open-weight reasoning models, which generate chains-of-thought before providing final answers. The team finds that traditional methods, such as relying on human evaluation or analysing the text of the chains-of-thought themselves, often struggle to accurately predict whether a model’s final response will be safe or harmful, as the reasoning process can be misleading. Instead, a simple linear probe trained on the model’s internal activations, the data representing its reasoning process, consistently outperforms these methods in predicting alignment. Importantly, this probe can detect potential misalignment early in the reasoning process, even before the model completes its chain of thought, suggesting its potential as a real-time monitoring tool.

While the research demonstrates a strong correlation between chain-of-thought activations and output alignment, the precise causal mechanisms remain unclear. The authors acknowledge that relying on these “black-box” predictors also presents challenges, particularly in ensuring the transparency and legibility of their decisions for human oversight. Future work should explore the generalizability of these findings to other types of models and alignment tasks.

👉 More information
🗞 Can We Predict Alignment Before Models Finish Thinking? Towards Monitoring Misaligned Reasoning Models
🧠 DOI: https://doi.org/10.48550/arXiv.2507.12428

Quantum News

Quantum News

As the Official Quantum Dog (or hound) by role is to dig out the latest nuggets of quantum goodness. There is so much happening right now in the field of technology, whether AI or the march of robots. But Quantum occupies a special space. Quite literally a special space. A Hilbert space infact, haha! Here I try to provide some of the news that might be considered breaking news in the Quantum Computing space.

Latest Posts by Quantum News:

Toyota & ORCA Achieve 80% Compute Time Reduction Using Quantum Reservoir Computing

Toyota & ORCA Achieve 80% Compute Time Reduction Using Quantum Reservoir Computing

January 14, 2026
GlobalFoundries Acquires Synopsys’ Processor IP to Accelerate Physical AI

GlobalFoundries Acquires Synopsys’ Processor IP to Accelerate Physical AI

January 14, 2026
Fujitsu & Toyota Systems Accelerate Automotive Design 20x with Quantum-Inspired AI

Fujitsu & Toyota Systems Accelerate Automotive Design 20x with Quantum-Inspired AI

January 14, 2026