Credit risk prediction models seek to predict whether a loan applicant will default (bad applicant) or not (good applicant). This can be treated as a machine learning (ML) problem, and ML algorithms have recently proven to be of great practical value in solving a variety of risk problems, including credit risk prediction. One of the most active areas of recent ML research is the use of ensemble (combining) classifiers: research indicates that combining individual classifiers, for example by having them vote for the most popular class, can lead to a significant improvement in classification performance. This paper explores the predictive behaviour of five classifiers under different types of noise in terms of credit risk prediction accuracy, and examines how that accuracy can be improved by using classifier ensembles. Benchmarking results on four credit datasets are presented, comparing the ensemble against each individual classifier's predictive accuracy at various attribute noise levels. The experimental evaluation shows that the ensemble-of-classifiers technique has the potential to improve prediction accuracy.
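The majority-vote ensemble described above can be sketched as follows. This is a minimal illustration, not the paper's experimental setup: the synthetic dataset, the particular five base classifiers, and the Gaussian attribute-noise model are all assumptions chosen for the example.

```python
# Hedged sketch: a hard-voting ensemble of five classifiers compared with
# its members under increasing attribute noise. Dataset, classifier choice,
# and noise model are illustrative assumptions, not the paper's setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for a credit dataset: 1 = good applicant, 0 = bad.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def add_attribute_noise(X, level, rng):
    """Perturb a fraction `level` of attribute values with Gaussian noise."""
    X = X.copy()
    mask = rng.random(X.shape) < level
    X[mask] += rng.normal(0.0, X.std(), size=mask.sum())
    return X

base = [
    ("tree", DecisionTreeClassifier(random_state=0)),
    ("knn", KNeighborsClassifier()),
    ("nb", GaussianNB()),
    ("lr", LogisticRegression(max_iter=1000)),
    ("svm", SVC()),
]
ensemble = VotingClassifier(estimators=base, voting="hard")

for noise in (0.0, 0.1, 0.3):
    X_noisy = add_attribute_noise(X_te, noise, rng)
    scores = {name: clf.fit(X_tr, y_tr).score(X_noisy, y_te)
              for name, clf in base}
    ens_acc = ensemble.fit(X_tr, y_tr).score(X_noisy, y_te)
    print(f"noise={noise:.1f}  individual={scores}  ensemble={ens_acc:.3f}")
```

With hard voting, the ensemble's prediction is the class chosen by the most base classifiers, which is how a single noisy member's mistake can be outvoted by the rest.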