Is Finding Security Holes a Good Idea?
IEEE Security and Privacy
Use of relative code churn measures to predict system defect density
Proceedings of the 27th international conference on Software engineering
Managing Cybersecurity Resources (The Mcgraw-Hill Homeland Security Series)
MSR '05 Proceedings of the 2005 international workshop on Mining software repositories
Modeling the Vulnerability Discovery Process
ISSRE '05 Proceedings of the 16th IEEE International Symposium on Software Reliability Engineering
Measuring the attack surfaces of two FTP daemons
Proceedings of the 2nd ACM workshop on Quality of protection
Security Metrics: Replacing Fear, Uncertainty, and Doubt
Data Mining Static Code Attributes to Learn Defect Predictors
IEEE Transactions on Software Engineering
Milk or wine: does software security improve with age?
USENIX-SS'06 Proceedings of the 15th conference on USENIX Security Symposium - Volume 15
Predicting Defects for Eclipse
PROMISE '07 Proceedings of the Third International Workshop on Predictor Models in Software Engineering
Comments on "Data Mining Static Code Attributes to Learn Defect Predictors"
IEEE Transactions on Software Engineering
Predicting vulnerable software components
Proceedings of the 14th ACM conference on Computer and communications security
Predicting Defective Software Components from Code Complexity Measures
PRDC '07 Proceedings of the 13th Pacific Rim International Symposium on Dependable Computing
Comparing design and code metrics for software quality prediction
Proceedings of the 4th international workshop on Predictor models in software engineering
An empirical model to predict security vulnerabilities using code complexity metrics
Proceedings of the Second ACM-IEEE international symposium on Empirical software engineering and measurement
Is complexity really the enemy of software security?
Proceedings of the 4th ACM workshop on Quality of protection
Firefox (In)security update dynamics exposed
ACM SIGCOMM Computer Communication Review
A systematic review of software fault prediction studies
Expert Systems with Applications: An International Journal
Toward Non-security Failures as a Predictor of Security Faults and Failures
ESSoS '09 Proceedings of the 1st International Symposium on Engineering Secure Software and Systems
Predicting Attack-prone Components
ICST '09 Proceedings of the 2009 International Conference on Software Testing Verification and Validation
Secure open source collaboration: an empirical study of Linus' law
Proceedings of the 16th ACM conference on Computer and communications security
Predicting defects with program dependencies
ESEM '09 Proceedings of the 2009 3rd International Symposium on Empirical Software Engineering and Measurement
Using complexity, coupling, and cohesion metrics as early indicators of vulnerabilities
Journal of Systems Architecture: the EUROMICRO Journal
Predicting vulnerable software components with dependency graphs
Proceedings of the 6th International Workshop on Security Measurements and Metrics
After-life vulnerabilities: a study on firefox evolution, its vulnerabilities, and fixes
ESSoS'11 Proceedings of the Third international conference on Engineering secure software and systems
An empirical study on using the national vulnerability database to predict software vulnerabilities
DEXA'11 Proceedings of the 22nd international conference on Database and expert systems applications - Volume Part I
An idea of an independent validation of vulnerability discovery models
ESSoS'12 Proceedings of the 4th international conference on Engineering Secure Software and Systems
Comparing and applying attack surface metrics
Proceedings of the 4th international workshop on Security measurements and metrics
Recent years have seen a trend toward quantitative security assessment and the use of empirical methods to analyze or predict vulnerable components. Many papers have focused on vulnerability discovery models built on either public vulnerability databases (e.g., CVE, NVD) or vendor-specific ones (e.g., MFSA); some combine both. Most of these works address a knowledge problem: can we understand the empirical causes of vulnerabilities, and can we predict them? Yet if the data sources do not fully capture the phenomenon we want to predict, a predictor may be optimal with respect to the available data but unsatisfactory in practice. In our work, we focus on a more fundamental question: the quality of the vulnerability databases themselves. We provide an analytical comparison of different security-metric papers and their respective data sources, and we show, using experimental data for Mozilla Firefox, how different data sources can lead to completely different results.
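The closing claim, that different data sources can lead to completely different results, can be illustrated with a minimal sketch. All component names and advisory IDs below are fabricated placeholders, not real Firefox data; the point is only that per-component vulnerability counts, and hence any ranking or predictor trained on them, diverge once the sources disagree.

```python
# Illustrative sketch with hypothetical data: two vulnerability data
# sources, each mapping component name -> set of advisory IDs.

def count_per_component(source):
    """Number of distinct advisories recorded per component."""
    return {component: len(ids) for component, ids in source.items()}

def disagreements(a, b):
    """Components whose vulnerability counts differ between the two sources."""
    counts_a, counts_b = count_per_component(a), count_per_component(b)
    every_component = set(counts_a) | set(counts_b)
    return {c for c in every_component if counts_a.get(c, 0) != counts_b.get(c, 0)}

# Hypothetical entries: a public database (NVD-like) vs. a vendor one (MFSA-like).
public_db = {
    "layout": {"ID-0001", "ID-0002"},
    "js":     {"ID-0003"},
}
vendor_db = {
    "layout":  {"ID-0001"},             # vendor records fewer layout issues
    "js":      {"ID-0003", "ID-0004"},  # but one extra js advisory
    "network": {"ID-0005"},             # and a component the public db misses
}

if __name__ == "__main__":
    diff = disagreements(public_db, vendor_db)
    print(sorted(diff))  # every component is counted differently by the two sources
```

With these placeholder entries, each source would rank the components differently, so any defect predictor validated against one database need not hold against the other.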