Probability theory is the framework for making decisions under uncertainty. In classification, Bayes' rule is used to compute the probabilities of the classes, and a central question is how to assign raw data to classes rationally so as to minimize the expected risk. Bayesian theory can roughly be summarized by one principle: to predict the future, one must look at the past. The Naive Bayes classifier is one of the most widely used practical Bayesian learning methods. K-Nearest Neighbor is a supervised learning algorithm in which a new query instance is classified by a majority vote among its K nearest neighbors; the classifier fits no explicit model and relies only on the stored training data. In this paper, after reviewing Bayesian theory, the Naive Bayes and K-Nearest Neighbor classifiers are implemented and applied to a "credit card approval" dataset. The performance of the two classifiers on this application is then compared in terms of correct classification and misclassification, and we examine how the performance of the K-Nearest Neighbor classifier can be improved by varying the value of k.
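The two classifiers described above can be sketched in a few lines of plain Python. This is a minimal illustration, not the paper's implementation: the tiny (income, debt ratio) dataset and the feature names are invented stand-ins for the credit card approval data, and the Naive Bayes variant shown assumes Gaussian class-conditional densities.

```python
import math
from collections import Counter

# Invented stand-in for the "credit card approval" data: each applicant is
# (income, debt_ratio) and the label is 1 = approved, 0 = declined.
train = [
    ((5.0, 0.2), 1), ((6.1, 0.1), 1), ((5.5, 0.3), 1), ((7.0, 0.2), 1),
    ((2.0, 0.8), 0), ((1.5, 0.9), 0), ((2.5, 0.7), 0), ((1.0, 0.6), 0),
]

def gaussian_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def naive_bayes_predict(train, x):
    """Gaussian Naive Bayes: argmax over classes c of P(c) * prod_i P(x_i | c)."""
    best, best_score = None, -1.0
    for c in {y for _, y in train}:
        feats = [f for f, y in train if y == c]
        score = len(feats) / len(train)  # class prior P(c)
        for i in range(len(x)):
            vals = [f[i] for f in feats]
            mean = sum(vals) / len(vals)
            var = sum((v - mean) ** 2 for v in vals) / len(vals) + 1e-9
            score *= gaussian_pdf(x[i], mean, var)  # likelihood of feature i
        if score > best_score:
            best, best_score = c, score
    return best

def knn_predict(train, x, k):
    """K-Nearest Neighbor: majority vote among the k closest training points."""
    ranked = sorted(train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))
    return Counter(y for _, y in ranked[:k]).most_common(1)[0][0]

query = (5.8, 0.25)  # a high-income, low-debt applicant
print("Naive Bayes:", naive_bayes_predict(train, query))
for k in (1, 3, 5):  # varying k, as studied in the paper
    print(f"KNN (k={k}):", knn_predict(train, query, k))
```

On this toy data both classifiers approve the high-income query for every k shown; on real data, sweeping k and comparing misclassification rates is exactly the experiment the abstract describes.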