UC Davis Team Enhances Error Detection in C Programs with LLM-Integrated Analysis

Researchers from the University of California, Davis have developed a novel approach that combines Large Language Models (LLMs) with static program analysis, known as interleaving static analysis and LLM prompting. The method alternates between static-analysis passes and LLM queries during program analysis, improving error-specification inference in systems code. Evaluated on real-world C programs, the approach achieved higher recall and F1 scores across all benchmarks while maintaining comparable precision. It could improve the accuracy of static analysis tools and thereby aid software testing and debugging; further research is needed to explore its full potential and limitations.

What is the Interleaving Static Analysis and LLM Prompting Approach?

The paper, authored by Patrick J. Chapman, Cindy Rubio-González, and Aditya V. Thakur of the University of California, Davis, introduces a novel approach that combines Large Language Models (LLMs) with static program analysis. This method, referred to as interleaving static analysis and LLM prompting, alternates between calls to the static analyzer and queries to the LLM during program analysis. The prompt used to query the LLM is constructed from intermediate results of the static analysis, and the result of each LLM query is then used in the subsequent analysis of the program.
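As a concrete illustration of this alternation, the toy sketch below runs a miniature "static analysis" to a fixpoint and, whenever it gets stuck on a function it cannot resolve, consults a canned LLM oracle and feeds the answer back into the analysis. The program model, the mini-analysis, and the oracle are all illustrative stand-ins, not the paper's EESI implementation or its actual prompts.

```python
# Toy program: each function either returns a known error constant, forwards
# its callee's return value, or is opaque (no body available to the analyzer).
PROGRAM = {
    "read_file":   ("error_value", -1),        # returns -1 on error
    "load_config": ("forwards", "read_file"),  # propagates read_file's errors
    "parse_json":  ("opaque",),                # third-party, body unavailable
    "init_app":    ("forwards", "parse_json"),
}

def query_llm(function_name):
    """Canned 'LLM' oracle; a real prompt would embed the intermediate
    analysis results (this knowledge table is an illustrative assumption)."""
    knowledge = {"parse_json": 0}  # pretend the docs say: returns NULL/0 on error
    return knowledge.get(function_name)

def static_pass(specs):
    """One fixpoint round of the toy analysis; returns unresolved functions."""
    changed = True
    while changed:
        changed = False
        for fn, desc in PROGRAM.items():
            if fn in specs:
                continue
            if desc[0] == "error_value":
                specs[fn] = desc[1]
                changed = True
            elif desc[0] == "forwards" and desc[1] in specs:
                specs[fn] = specs[desc[1]]
                changed = True
    return [fn for fn in PROGRAM if fn not in specs]

def infer_error_specs():
    specs = {}  # function name -> inferred error return value
    while True:
        unresolved = static_pass(specs)  # run the analysis until it is stuck
        if not unresolved:
            return specs                 # everything resolved, we are done
        progress = False
        for fn in unresolved:            # ask the LLM about stuck functions
            answer = query_llm(fn)
            if answer is not None:
                specs[fn] = answer       # feed the answer back into the analysis
                progress = True
        if not progress:
            return specs                 # the LLM could not help either

print(infer_error_specs())
```

Here the analysis resolves `read_file` and `load_config` on its own, the oracle supplies a specification for the opaque `parse_json`, and a second analysis round then resolves `init_app` — the same stuck/query/resume rhythm the paper describes.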

This approach is applied to the problem of error specification inference in systems code written in C. In other words, it is used to infer the set of values returned by each function upon error. This can aid in program understanding as well as in finding error-handling bugs. The authors evaluate their approach on real-world C programs such as MbedTLS and zlib by incorporating LLMs into EESI, a state-of-the-art static analysis for error specification inference.
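To make "error specification" concrete: for a C function, it is the set of return values that signal failure (for example, `malloc` returns `NULL` and POSIX `open` returns `-1` on error). The sketch below encodes a few such specifications as data and uses them in a toy check for call sites that never test the result against an error value; the representation and the checker are illustrative assumptions, not EESI's actual design.

```python
# Illustrative error specifications for well-known C functions: the set of
# return values that signal failure (the analysis infers such sets per function).
ERROR_SPECS = {
    "malloc": {"NULL"},  # returns NULL on allocation failure
    "open":   {"-1"},    # POSIX: returns -1 on error
}

def find_unchecked_calls(call_sites):
    """Flag call sites whose result is never compared against an error value."""
    bugs = []
    for callee, compared_values in call_sites:
        spec = ERROR_SPECS.get(callee)
        if spec and not (spec & compared_values):
            bugs.append(callee)
    return bugs

# A caller that checks open() against -1 but never checks malloc()'s result:
print(find_unchecked_calls([("open", {"-1"}), ("malloc", set())]))
```

The flagged `malloc` call is exactly the kind of error-handling bug that inferred specifications make detectable.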

Compared to EESI alone, the authors’ approach achieves higher recall on all benchmarks, raising the average from 52.55% to 77.83%, and a higher F1 score, raising the average from 0.612 to 0.804, while precision remains comparable (an average of 86.67% versus 85.12%).
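For context, F1 is the harmonic mean of precision and recall, F1 = 2PR / (P + R). The averages above are taken per benchmark, so the reported average F1 need not equal the F1 of the averaged precision and recall, but a quick computation shows the numbers are in the right neighborhood:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# F1 of the reported average precision/recall of the authors' approach:
print(round(f1(0.8512, 0.7783), 3))  # 0.813, near the reported average F1 of 0.804
```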

How Does the Interleaving Static Analysis and LLM Prompting Approach Work?

The interleaving static analysis and LLM prompting approach leverages the reasoning abilities of LLMs to improve static program analysis. LLMs have demonstrated impressive reasoning abilities on natural-language and programming tasks via few-shot and chain-of-thought prompting. The approach presented in this paper calls on this reasoning ability precisely when the static analysis is unable to make progress on its own.

The prompt sent to the LLM incorporates the current intermediate results of the static analysis, and the LLM’s answer is fed back into the analysis, enabling it to make further progress. Iterating between these two steps yields a more complete and accurate analysis of the program than static analysis alone.
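As a hedged sketch of what such a prompt might look like, the function below embeds the error specifications inferred so far alongside the source of the function under analysis. The template, the function names, and the source snippet are illustrative assumptions; the paper's actual prompts will differ.

```python
def build_prompt(function_name, known_specs, source_snippet):
    """Build an LLM prompt that embeds the intermediate analysis results."""
    facts = "\n".join(f"- {fn} returns {val} on error"
                      for fn, val in sorted(known_specs.items()))
    return (
        "You are analyzing error-handling conventions in C code.\n"
        "Known error specifications inferred so far:\n"
        f"{facts}\n\n"
        f"Source of {function_name}:\n{source_snippet}\n\n"
        f"Question: which return values of {function_name} indicate an error?"
    )

# Hypothetical example: the analysis already knows about malloc and read_file.
prompt = build_prompt(
    "load_config",
    {"read_file": "-1", "malloc": "NULL"},
    "int load_config(cfg_t *c) { if (read_file(c->path) < 0) return -1; /* ... */ }",
)
print(prompt)
```

Because the known specifications are embedded in the prompt, each round of LLM queries benefits from everything the static analysis has established so far.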

What is the Significance of the Interleaving Static Analysis and LLM Prompting Approach?

The significance of the interleaving static analysis and LLM prompting approach lies in its potential to improve program understanding and error detection. By inferring the set of values returned by each function upon error, this approach can aid in understanding the program’s behavior and identifying potential error-handling bugs.

The authors’ evaluation of their approach on real-world C programs demonstrates its effectiveness. By incorporating LLMs into EESI, a state-of-the-art static analysis for error specification inference, they were able to achieve higher recall and F1 scores across all benchmarks, while maintaining precision. This suggests that the interleaving static analysis and LLM prompting approach could be a valuable tool for improving the accuracy and efficiency of static program analysis.

What are the Potential Applications of the Interleaving Static Analysis and LLM Prompting Approach?

The interleaving static analysis and LLM prompting approach has potential applications in a variety of areas related to program analysis and error detection. For instance, it could be used to improve the accuracy and efficiency of static analysis tools, which are commonly used in software development to detect potential errors and vulnerabilities in code.

Furthermore, by aiding in the understanding of program behavior and the identification of error-handling bugs, this approach could also be useful in software testing and debugging. It could potentially be incorporated into automated testing tools to improve their ability to detect and diagnose errors.

What are the Limitations and Future Directions of the Interleaving Static Analysis and LLM Prompting Approach?

While the interleaving static analysis and LLM prompting approach shows promise, it is important to note that it is still a relatively new method and further research is needed to fully understand its potential and limitations. For instance, it would be interesting to explore how this approach could be adapted and applied to other programming languages and types of code.

Furthermore, while the authors’ evaluation of their approach demonstrates its effectiveness, it would be beneficial to conduct further evaluations on a wider range of programs and benchmarks to validate these results. Future research could also explore ways to further optimize the interleaving process and improve the efficiency of the LLM prompting.

Publication details: “Interleaving Static Analysis and LLM Prompting”
Publication Date: 2024-06-20
Authors: Patrick J. Chapman, Cindy Rubio-González and Aditya Thakur
DOI: https://doi.org/10.1145/3652588.3663317
