Allocating code inspection and testing resources to the most problematic code areas is important for reducing development time and cost. While complexity metrics collected statically from software artifacts are known to be helpful in finding vulnerable code locations, some complex code is rarely executed in practice, so its vulnerabilities have less chance of being detected. To augment the use of static complexity metrics, this study examines execution complexity metrics, collected during code execution, as indicators of vulnerable code locations. We conducted case studies on two large, widely used open-source projects: the Mozilla Firefox web browser and the Wireshark network protocol analyzer. Our results indicate that execution complexity metrics are better indicators of vulnerable code locations than the most commonly used static complexity metric, lines of source code. The ability of execution complexity metrics to discriminate vulnerable code locations from neutral ones, and to predict vulnerable code locations, varies across projects. However, vulnerability prediction models using execution complexity metrics are superior to models using static complexity metrics in reducing inspection effort.
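To make the inspection-effort comparison concrete, the following is a minimal sketch (not the study's actual methodology) of how one might rank code units by a metric and measure how much of the codebase must be inspected before all known-vulnerable units are covered. The file names, metric values, and the `exec_paths` feature are entirely hypothetical illustrations of a static metric (lines of code) versus a dynamically collected execution complexity metric.

```python
# Hypothetical sketch: rank code units by a metric (descending) and measure
# the fraction of units that must be inspected to cover every vulnerable one.
# All data below is illustrative, not taken from the Firefox/Wireshark study.

def inspection_effort(units, metric):
    """Fraction of units inspected (in ranked order) to find all vulnerable ones."""
    ranked = sorted(units, key=lambda u: u[metric], reverse=True)
    last_vuln = max(i for i, u in enumerate(ranked) if u["vulnerable"])
    return (last_vuln + 1) / len(ranked)

# Toy dataset: per-file static size (loc) and a dynamic execution count (exec_paths).
files = [
    {"name": "parser.c", "loc": 1200, "exec_paths": 340, "vulnerable": True},
    {"name": "ui.c",     "loc": 2500, "exec_paths": 12,  "vulnerable": False},
    {"name": "net.c",    "loc": 800,  "exec_paths": 290, "vulnerable": True},
    {"name": "util.c",   "loc": 300,  "exec_paths": 25,  "vulnerable": False},
]

print(inspection_effort(files, "loc"))         # → 0.75 (rank by static LOC)
print(inspection_effort(files, "exec_paths"))  # → 0.5  (rank by dynamic metric)
```

In this toy dataset the large but rarely executed `ui.c` inflates the effort under LOC ranking, while the execution-based ranking surfaces both vulnerable files first, mirroring the paper's observation that rarely executed complex code weakens purely static indicators.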