Large Language Models Predict Traffic Impact From Incident Reports Now

Predicting the consequences of traffic incidents remains a critical challenge for transport network management, demanding accurate and timely assessments of disruption. Current approaches frequently rely on extensive labelled datasets for training machine learning models, a process that is both time-consuming and resource-intensive. New research explores an alternative that utilises large language models (LLMs), a type of artificial intelligence designed to understand and generate human language, to forecast the impact of incidents without requiring prior training on specific prediction tasks. George Jagadeesh, Srikrishna Iyer, Michal Polanowski, and Kai Xin Thia detail their investigation in ‘Application and Evaluation of Large Language Models for Forecasting the Impact of Traffic Incidents’, demonstrating a viable solution that combines LLMs with free-text incident logs to achieve accuracy comparable to established machine learning techniques.

Traffic congestion imposes a considerable economic burden, with unpredictable incidents being a primary contributor to non-recurring delays that affect travel times, productivity, and fuel consumption. Accurate prediction of how a traffic incident will evolve and affect traffic flow is therefore valuable both for travellers seeking alternative routes and for traffic management centres responding to the event, yet the inherent randomness of incidents makes reliable forecasting challenging. This necessitates innovative approaches, prompting investigation into whether LLMs can address this complex problem.

Recent advances in LLMs offer a potentially transformative approach. These models exhibit an ability known as in-context learning, which allows them to perform new tasks from a few provided examples without extensive retraining. This capability bypasses the need for large labelled datasets, a significant advantage in dynamic environments where data is limited or constantly changing. LLMs also excel at processing and extracting information from unstructured text, unlocking previously untapped data sources such as emergency responder reports.

This study investigates the potential of LLMs to predict the impact of traffic incidents on traffic flow, offering a potentially advantageous alternative to traditional machine learning approaches. Unlike conventional methods, which require extensive labelled training datasets, LLMs can forecast incident impact using readily available free-text incident logs and real-time traffic data, streamlining the process and reducing reliance on costly data preparation. The core of this research is a fully LLM-based solution: predictions are generated by combining current traffic characteristics with information that the LLM itself extracts from incident descriptions, circumventing the need for manual feature engineering.
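To make the idea concrete, here is a minimal sketch, in Python, of how current traffic measurements and a free-text incident log might be packaged into a single prediction query for the LLM. The field names, units, and prompt wording are illustrative assumptions, not the authors’ exact format.

```python
# A minimal sketch: combine current traffic measurements with a free-text
# incident report into one natural-language query. Field names, units, and
# the question wording are illustrative assumptions.

def build_query(traffic: dict, incident_text: str, horizon_min: int) -> str:
    """Format one incident as a natural-language query for the LLM."""
    return (
        "Current traffic conditions near the incident:\n"
        f"- Average speed: {traffic['speed_mph']} mph\n"
        f"- Flow: {traffic['flow_veh_per_hr']} vehicles/hour\n"
        f"- Occupancy: {traffic['occupancy_pct']}%\n\n"
        f"Incident report (free text):\n{incident_text}\n\n"
        f"Question: estimate the additional vehicle delay caused by this incident "
        f"over the next {horizon_min} minutes. Answer with a single number."
    )

query = build_query(
    {"speed_mph": 34, "flow_veh_per_hr": 5200, "occupancy_pct": 18},
    "Multi-vehicle collision blocking the right lane on I-80 westbound near Exit 12.",
    horizon_min=15,
)
print(query)
```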

A crucial element of this approach is a novel method for selecting relevant examples to include in the LLM’s ‘prompt’ (essentially, the instructions given to the model), guiding it towards more accurate predictions by providing analogous situations from past incidents. This in-context learning technique allows the LLM to draw parallels and extrapolate likely outcomes, significantly improving predictive accuracy. The researchers rigorously tested three advanced LLMs, GPT-4.1, Claude 3.7 Sonnet, and Gemini 2.0 Flash, alongside two established machine learning models known for their effectiveness in traffic prediction, using a comprehensive dataset sourced from the California Department of Transportation’s PeMS system.

The selection of examples for the LLM’s prompt was not random; the researchers developed a method to identify past incidents most similar to the current situation, based on factors such as incident type, location, and time of day. This targeted approach significantly improved the LLM’s predictive accuracy, demonstrating the importance of carefully crafted prompts in eliciting optimal performance. The study focused on predicting traffic impact over two time horizons, 15 and 30 minutes after an incident, allowing a nuanced assessment of the LLM’s ability to capture both immediate and longer-term effects.
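As an illustration of what such similarity-based selection could look like, the sketch below scores historical incidents by incident-type match, spatial proximity, and time-of-day proximity, and keeps the most similar ones as in-context examples. The stored fields, weights, and distance measure are assumptions made for the example, not the paper’s actual scoring scheme.

```python
# A sketch of similarity-based example selection over a store of past incidents.
# The fields, weights, and planar distance are illustrative assumptions; a real
# system would likely use road-network distance and tuned weights.
from dataclasses import dataclass
from datetime import datetime
from math import hypot

@dataclass
class PastIncident:
    incident_type: str                 # e.g. "collision", "stalled vehicle"
    location: tuple[float, float]      # (latitude, longitude)
    start_time: datetime
    description: str                   # free-text incident log
    observed_delay_veh_hr: float       # known outcome, used as the example answer

def similarity(query_type: str, query_loc: tuple[float, float],
               query_time: datetime, past: PastIncident) -> float:
    """Higher score = more similar to the current incident."""
    type_score = 1.0 if past.incident_type == query_type else 0.0
    # Rough planar distance in degrees of latitude/longitude.
    dist = hypot(query_loc[0] - past.location[0], query_loc[1] - past.location[1])
    loc_score = 1.0 / (1.0 + 10.0 * dist)
    # Time-of-day proximity in hours, wrapped around midnight.
    dt = abs(query_time.hour - past.start_time.hour)
    tod_score = 1.0 - min(dt, 24 - dt) / 12.0
    return 0.4 * type_score + 0.3 * loc_score + 0.3 * tod_score

def select_examples(history: list[PastIncident], query_type: str,
                    query_loc: tuple[float, float], query_time: datetime,
                    k: int = 5) -> list[PastIncident]:
    """Return the k past incidents most similar to the current one."""
    ranked = sorted(history,
                    key=lambda p: similarity(query_type, query_loc, query_time, p),
                    reverse=True)
    return ranked[:k]
```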

Results indicate that the most proficient LLMs, GPT-4.1 and Claude 3.7 Sonnet, achieve prediction accuracies comparable to those of the traditional machine learning models, and do so without any task-specific training on traffic incident prediction, demonstrating their capacity for zero-shot and few-shot learning. GPT-4.1 exhibits superior performance in predicting 15-minute delays, while Claude 3.7 Sonnet excels at forecasting 30-minute impacts, suggesting nuanced strengths across different prediction horizons.

This finding demonstrates that LLMs can perform competitively without task-specific training: the knowledge already embedded in the model, combined with the effective example selection method, allows it to generalise from past incidents and forecast future impacts accurately. This suggests a potential paradigm shift in intelligent transportation systems, where LLMs could be deployed rapidly to predict and mitigate the effects of traffic incidents without extensive data preparation and model training.

The study proposes a system in which LLMs predict the impact of an incident from a combination of structured traffic data and information extracted from incident descriptions, using in-context learning guided by a purpose-built method for selecting illustrative examples. The authors demonstrate that carefully chosen examples significantly enhance predictive accuracy, outperforming random selection and underscoring the importance of prompt engineering.
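Building on the earlier sketches, the snippet below shows one plausible way to assemble the selected examples and the current query into a few-shot prompt, and to parse a numeric delay from the model’s reply. The prompt layout and the parsing regex are assumptions, and the call to whichever LLM API is used is omitted.

```python
# A sketch of few-shot prompt assembly and answer parsing. Assumes the
# PastIncident objects from the selection sketch above; the layout and the
# regex are illustrative, and the actual LLM API call is not shown.
import re

def build_few_shot_prompt(examples, query: str) -> str:
    """Prepend solved past incidents to the current query."""
    parts = ["You estimate incident-induced traffic delay from past cases.\n"]
    for i, ex in enumerate(examples, 1):
        parts.append(
            f"Example {i}:\n{ex.description}\n"
            f"Observed delay: {ex.observed_delay_veh_hr:.1f} vehicle-hours\n"
        )
    parts.append("Now the current incident:\n" + query)
    return "\n".join(parts)

def parse_delay(reply: str) -> float | None:
    """Extract the first number from the model's reply, if any."""
    match = re.search(r"[-+]?\d*\.?\d+", reply)
    return float(match.group()) if match else None
```

Robust parsing of the numeric answer matters in practice, since the extracted values are what get compared against the machine learning baselines during evaluation.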

This research provides a promising pathway towards improved traffic management and incident response, underscoring the potential of LLMs to address complex real-world problems without extensive model training. By leveraging pre-existing knowledge and reasoning capabilities, LLMs offer a more flexible and adaptable foundation for intelligent transportation systems.

The ability to achieve comparable results without extensive training represents a substantial efficiency gain and opens possibilities for rapid deployment in real-world scenarios. Enriching the feature set with additional contextual information, such as weather conditions or road geometry, promises to improve predictive accuracy further. Exploring more sophisticated prompt engineering techniques, including the incorporation of expert knowledge about incident scenarios, is another avenue for enhancement, and continued research should also track advances in LLM capabilities. As models become more powerful and efficient, their application to traffic incident prediction will likely become even more effective, contributing to more responsive and resilient transportation networks.

👉 More information
🗞 Application and Evaluation of Large Language Models for Forecasting the Impact of Traffic Incidents
🧠 DOI: https://doi.org/10.48550/arXiv.2507.04803
