Adaptive Model for Ranking Code-Based Static Analysis Alerts
Student: Sarah Smith Heckman, North Carolina State University
Advisor: Laurie Williams, North Carolina State University
Abstract: Static analysis tools are useful for finding common
programming mistakes that often lead to field failures. However,
automated static analysis generates a large number of false positive
alerts, each requiring manual inspection by a developer to determine
whether the alert indicates a fault. This poster presents an
adaptive ranking model that ranks static analysis alerts by the
likelihood that each alert is a fault in the source code. Alerts are
ranked based on the population of generated alerts; historical
developer feedback, in the form of suppressed false positive alerts
and fixed true positive alerts; and historical, application-specific
data about the alert ranking factors. The ordering of alerts
produced by the adaptive ranking model is compared against baselines
of randomly-, optimally-, and static analysis tool-ordered alerts on
a small role-based healthcare application. The adaptive ranking
model provides developers with 81% of the true positive alerts after
they have investigated only 20% of the alerts.
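To make the feedback-driven ranking idea concrete, the sketch below
shows one way such a ranker could be structured; it is a minimal
illustration based only on the factors named above, not the model
presented in the poster. The per-alert-type factor, the
Laplace-smoothed score, and all class, method, and field names are
illustrative assumptions.

    # Minimal sketch of an adaptive alert ranker driven by developer
    # feedback. The alert-type factor and smoothed score are assumptions
    # for illustration, not the poster's actual ranking model.
    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class Alert:
        alert_type: str      # e.g. a bug pattern reported by the tool (assumed factor)
        location: str        # file/line the alert points at
        score: float = 0.0   # estimated likelihood that the alert is a fault

    class AdaptiveRanker:
        def __init__(self):
            # Historical, application-specific feedback per ranking factor:
            # how often alerts of each type were fixed (true positive)
            # versus suppressed (false positive).
            self.fixed = defaultdict(int)
            self.suppressed = defaultdict(int)

        def record_fix(self, alert: Alert) -> None:
            self.fixed[alert.alert_type] += 1

        def record_suppression(self, alert: Alert) -> None:
            self.suppressed[alert.alert_type] += 1

        def rank(self, alerts: list[Alert]) -> list[Alert]:
            for a in alerts:
                tp = self.fixed[a.alert_type]
                fp = self.suppressed[a.alert_type]
                # Laplace-smoothed estimate that this alert type is a fault.
                a.score = (tp + 1) / (tp + fp + 2)
            # Highest-likelihood alerts first, so developers inspect them earliest.
            return sorted(alerts, key=lambda a: a.score, reverse=True)

    # Usage: rank a fresh population of alerts, then feed decisions back in.
    ranker = AdaptiveRanker()
    ranker.record_fix(Alert("NULL_DEREF", "Auth.java:7"))          # past true positive
    ranker.record_suppression(Alert("UNUSED_FIELD", "Log.java:3")) # past false positive
    alerts = [Alert("NULL_DEREF", "Patient.java:42"),
              Alert("UNUSED_FIELD", "Visit.java:10")]
    for a in ranker.rank(alerts):
        print(f"{a.score:.2f}  {a.alert_type:12s} {a.location}")

As feedback accumulates, the per-type estimates shift, so the
ordering adapts to the application rather than staying fixed to the
tool's default severity ordering.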
Bio:
Sarah Smith Heckman is a second-year PhD student in the Department
of Computer Science at North Carolina State University under the
supervision of Dr. Laurie Williams and an intern at IBM. Her
research interests are in software engineering, static analysis, and
testing. Sarah received the IBM PhD Fellowship in 2006 and 2007.
She received her BS and MCS in Computer Science from North Carolina
State University in 2004 and 2005, respectively.