Debugging: From the Inside Out

Software is dynamic and complex. It is a rich artifact that reflects years of design, maintenance, creativity, utility, and changing requirements. Moreover, once executed, the software comes alive: the intentions of the developers are realized (or not) only when the software is put into action. It is this dynamic nature of software that intrigues ISR Prof. James A. Jones.

Jones conducts research with an aim to improve the quality of software and the efficiency with which it is developed and maintained. He is specifically interested in the research areas of software analysis, testing, and visualization. But it is the complex inner workings of the system — workings so complex that often no single developer can fully fathom all of the component mechanisms and their interrelationships — that fascinate Jones and inform his research efforts.

“It’s like a clock. The face, with its minute and hour hands, performs a function, but it’s not as interesting as opening it up and seeing the cogs move and mesh — this is where the magic happens,” remarks Jones. “There are so many software artifacts that we can study: the static structure of the code, the dynamic behavior of the running system, the paths executed through the instructions, the values of the variables, the evolution of the system in a version control repository, the bug reports, and so on,” he continues. “It’s like peeking under the hood and figuring out what’s happening.”

As an example of this dynamic information, the testing process at many software organizations produces “coverage information” as part of white-box testing, to assess the extent to which the software under test has been exercised. The original program is “instrumented” so that it keeps track of which instructions were exercised by each test case. The goal is to ensure that all parts of the system are being tested — otherwise, developers can have little confidence in the correctness of the parts of the software that have never been tested or executed.
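To make this concrete, here is a minimal sketch, in Python, of the kind of per-test line-coverage collection such instrumentation performs. Everything in it is illustrative rather than Jones’s actual tooling: the tracer, the tests, and the buggy mid function (a median-of-three example common in the fault-localization literature).

```python
import sys

def run_with_coverage(test_fn):
    """Run one test case, recording which source lines it executes."""
    executed = set()

    def tracer(frame, event, arg):
        if event == "line":
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)          # instrument: trace each executed line
    try:
        test_fn()
        passed = True
    except AssertionError:
        passed = False
    finally:
        sys.settrace(None)        # remove the instrumentation
    return executed, passed

def mid(x, y, z):
    """Return the median of three numbers (contains a seeded bug)."""
    if y < z:
        if x < y:
            return y
        elif x < z:
            return y              # bug: should be "return x"
    else:
        if x > y:
            return y
        elif x > z:
            return x
    return z

def test_1(): assert mid(1, 2, 3) == 2    # passes
def test_2(): assert mid(3, 2, 1) == 2    # passes
def test_3(): assert mid(2, 1, 3) == 2    # fails: exposes the bug

results = [run_with_coverage(t) for t in (test_1, test_2, test_3)]
for (coverage, passed), name in zip(results, ("t1", "t2", "t3")):
    print(name, "passed" if passed else "FAILED", "lines:", sorted(coverage))
```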

Jones noticed that this coverage information amounts to a wealth of residual evidence about how the software executes, useful well beyond its original purpose of assessing testing adequacy. Its availability led Jones to ask how coverage information could be used to automatically suggest where bugs might lie in the program when test cases fail. The idea is to use statistical inference techniques to identify the locations in the code that are most suspicious of causing failures. Jones likes to explain it as such: “Each instruction in the program likely influences the result of the test case. When the program fails, one of those instructions that were executed is likely to be the cause of the failure. Thus, if we can look at the aggregate of all of the test case passes and failures, we can start to infer which instructions are more suspicious of causing the failures. Those instructions more often executed in a failing context and least often in the passing context are suspicious of being the bug that caused those failures.”

Together with Professors Mary Jean Harrold and John Stasko at Georgia Tech, Jones created a technique and a tool called Tarantula that automatically finds parts of a program’s structure that are suspicious of causing failures. The tool presents the developer with a high-level view of the internal components of the program, with the suspiciousness of each component encoded in color: red denotes suspiciousness and green denotes safety. For the programs and bugs studied, Jones found that up to 90% of the bugs were identified by the tool.
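The published Tarantula suspiciousness metric makes the inference in that quote concrete: a statement is scored by the fraction of failing tests that execute it, normalized by the sum of the corresponding failing and passing fractions. The sketch below applies the metric to pass/fail coverage pairs shaped like those gathered in the earlier sketch; the toy coverage matrix is invented for illustration.

```python
def tarantula_suspiciousness(results):
    """results: iterable of (covered_statements, passed) pairs.

    susp(s) = %failed(s) / (%passed(s) + %failed(s)), where %failed(s)
    is the fraction of failing tests that execute statement s. Scores
    near 1 render red (suspicious); scores near 0 render green (safe).
    """
    total_passed = sum(1 for _, ok in results if ok) or 1
    total_failed = sum(1 for _, ok in results if not ok) or 1

    statements = set().union(*(cov for cov, _ in results))
    scores = {}
    for s in statements:
        p = sum(1 for cov, ok in results if ok and s in cov) / total_passed
        f = sum(1 for cov, ok in results if not ok and s in cov) / total_failed
        scores[s] = f / (p + f) if (p + f) > 0 else 0.0
    return scores

# Toy coverage matrix: statement ids each test executed, and pass/fail.
results = [({1, 2, 3}, True), ({1, 4, 5}, True), ({1, 2, 6}, False)]
for s, score in sorted(tarantula_suspiciousness(results).items(),
                       key=lambda kv: -kv[1]):
    print(f"statement {s}: suspiciousness {score:.2f}")
```

Here statement 6, executed only by the failing test, scores 1.0 and would be drawn bright red, while statements touched only by passing tests score 0.0 and would be drawn green.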

Another use of this execution information from the testing process allows Jones to automatically classify failures according to the bugs that caused them. Each execution produces a “fingerprint” of the execution behavior — the paths and profiles exercised in the system. Jones and colleagues use these fingerprints to cluster similar executions, in an attempt to identify groupings of similar failures that were caused by similar bugs. This is work that Jones started with Profs. Mary Jean Harrold and James Bowring at Georgia Tech and continues today with undergraduate researchers Frank Morales and Jordaniel Wolk at UCI. They have shown that automatically grouping failures enables programmers to debug multiple bugs independently, resulting in a 50% savings in debugging costs.
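As a rough sketch of this grouping, one can treat the set of entities covered by each failing execution as its fingerprint and cluster greedily by Jaccard similarity. The threshold, the fingerprints, and the single-pass strategy here are deliberate simplifications of the richer profiles and clustering methods in the published work.

```python
def jaccard(a, b):
    """Similarity of two execution fingerprints (sets of covered entities)."""
    return len(a & b) / len(a | b) if a | b else 1.0

def cluster_failures(fingerprints, threshold=0.5):
    """Greedy single-pass clustering: each failure joins the first cluster
    whose representative fingerprint it resembles closely enough."""
    clusters = []  # list of (representative, members)
    for fp in fingerprints:
        for rep, members in clusters:
            if jaccard(fp, rep) >= threshold:
                members.append(fp)
                break
        else:
            clusters.append((fp, [fp]))
    return clusters

# Hypothetical fingerprints from four failing runs: the first two likely
# share one bug, the last two another.
failures = [{1, 2, 3, 7}, {1, 2, 3, 8}, {4, 5, 6}, {4, 5, 6, 9}]
for i, (rep, members) in enumerate(cluster_failures(failures)):
    print(f"suspected bug #{i + 1}: {len(members)} failure(s)")
```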

Another debugging front Jones is exploring is the process through which programmers build an understanding of a program’s behavior when debugging with traditional debuggers. Together with undergraduate researchers Carter Jones and Donald Stern, Jones is building systems that will enable programmers to “steer” their exploration of the code through related and suspicious parts of the program, in order to identify, understand, and ultimately fix the bugs.

Preventing bugs in the first place is the ideal. With this goal in mind, Jones is developing a system that analyzes the structure of the program and the live changes being made by multiple developers on a distributed development project. This work is being conducted with Jones’s Ph.D. student Francisco Servant and collaborator ISR Prof. André van der Hoek. The system makes developers immediately aware of each other’s code changes, and of their impact, at the moment they make them. The goal is to prevent bugs that can be introduced by a lack of awareness of this impact. Their prototype implementation displays “Spheres of Influence” on a visualization of the program for each change to the code. Overlapping spheres should prompt developers to contact one another to ensure that new bugs or incompatible expectations of the software are not introduced to the code base.
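As a toy illustration of the overlap check only (the Spheres of Influence analysis itself is richer than this), suppose a change’s influence is approximated by forward reachability over a dependence graph. The graph and the developers’ change sets below are hypothetical.

```python
def impact_set(dep_graph, changed):
    """Entities transitively reachable from the changed ones: a crude
    approximation of a change's sphere of influence."""
    seen, stack = set(changed), list(changed)
    while stack:
        node = stack.pop()
        for dependent in dep_graph.get(node, ()):
            if dependent not in seen:
                seen.add(dependent)
                stack.append(dependent)
    return seen

# Hypothetical dependence edges: each key maps to the entities that
# depend on it.
dep_graph = {
    "parse": ["validate", "report"],
    "validate": ["save"],
    "render": ["report"],
}

alice = impact_set(dep_graph, {"parse"})   # Alice is editing parse()
bob = impact_set(dep_graph, {"render"})    # Bob is editing render()
overlap = alice & bob
if overlap:
    print("Spheres overlap on:", sorted(overlap))  # -> ['report']
```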

Jones, with his collaborators, is currently pursuing these projects and others in an effort to better understand software systems, produce programs with fewer bugs, and debug the systems more efficiently. And as a fortunate side effect, as Jones puts it, he will continue to be “fascinated to see the cogs whir.” 

For more on Prof. Jones’s research, see: http://isr.uci.edu/~jajones/

This article appeared in ISR Connector issue: