The thought processes of people have a significant impact on software quality, since software is designed, developed, and tested by people. Cognitive biases, defined as patterned deviations of human thought from the laws of logic and mathematics, are a likely cause of software defects. However, there is little empirical evidence to date to substantiate this assertion. In this research, we focus on a specific cognitive bias, confirmation bias: the tendency of people to seek evidence that confirms a hypothesis rather than evidence that refutes it. Owing to confirmation bias, developers tend to design unit tests to show that their code works rather than to make it fail. Confirmation bias is therefore believed to be one of the factors that lead to increased software defect density. In this research, we present a metric scheme that explores the impact of developers' confirmation bias on software defect density. To estimate the effectiveness of our metric scheme in quantifying confirmation bias within the context of software development, we performed an empirical study that addressed the prediction of the defective parts of software. In our empirical study, we used confirmation bias metrics on five datasets obtained from two companies. Our results provide empirical evidence that human thought processes and cognitive aspects deserve further investigation to improve decision making in software development for effective process management and resource allocation.
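To make the defect-prediction setup concrete, the sketch below shows how per-module confirmation-bias metrics could feed a simple classifier. Everything here is an assumption for illustration: the feature names (`verify_ratio`, `falsify_ratio`, i.e. the fraction of a developer's unit tests written to confirm versus refute behavior), the synthetic training rows, and the choice of a Gaussian Naive Bayes learner are not taken from the study itself.

```python
import math

# Hypothetical training data: each row is (verify_ratio, falsify_ratio, label),
# where label 1 marks a defective module. Values are invented for illustration.
train = [
    (0.90, 0.10, 1),
    (0.80, 0.20, 1),
    (0.85, 0.15, 1),
    (0.40, 0.60, 0),
    (0.30, 0.70, 0),
    (0.35, 0.65, 0),
]

def fit(rows):
    """Estimate class priors and per-class feature means/variances
    (a minimal Gaussian Naive Bayes)."""
    model = {}
    for label in {r[-1] for r in rows}:
        feats = [r[:-1] for r in rows if r[-1] == label]
        params = []
        for col in zip(*feats):
            mean = sum(col) / len(col)
            var = sum((x - mean) ** 2 for x in col) / len(col) + 1e-9
            params.append((mean, var))
        model[label] = (len(feats) / len(rows), params)
    return model

def predict(model, x):
    """Return the class with the highest log-posterior for feature vector x."""
    best, best_lp = None, float("-inf")
    for label, (prior, params) in model.items():
        lp = math.log(prior)
        for xi, (mean, var) in zip(x, params):
            lp += -0.5 * math.log(2 * math.pi * var) - (xi - mean) ** 2 / (2 * var)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

model = fit(train)
print(predict(model, (0.9, 0.1)))  # flagged as defect-prone (1)
print(predict(model, (0.3, 0.7)))  # flagged as clean (0)
```

In a real study the features would be the paper's confirmation-bias metrics computed per developer and aggregated per module, and the labels would come from the companies' defect records; the point of the sketch is only that such metrics plug into a standard classifier like any other static or process metric.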