In this paper we apply the weight-of-evidence reformulation of AdaBoosted naive Bayes scoring due to Ridgeway et al. (1998) to the diagnosis of insurance claim fraud. The method combines the advantages of boosting with the modelling power and representational attractiveness of the probabilistic weight-of-evidence scoring framework. We present the results of an experimental comparison, with an emphasis on both the discriminatory power and the calibration of the probability estimates. The data on which we evaluate the method consist of a representative set of closed personal injury protection automobile insurance claims from accidents that occurred in Massachusetts during 1993. The findings of the study reveal the method to be a valuable contribution to the design of effective, intelligible, accountable, and efficient fraud detection support.
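
To make the idea concrete, the following is a minimal sketch, not the authors' code and not Ridgeway et al.'s exact reformulation, of how boosted naive Bayes scoring can retain a weight-of-evidence reading. It assumes binary class labels (0 = legitimate, 1 = fraud) and categorical attributes integer-coded in a matrix X; every function name and the data layout are hypothetical. Each naive Bayes round contributes a per-attribute sum of log-likelihood ratios (a weight of evidence), and the boosted score is a weighted sum of those additive contributions, which keeps the final score attribute-wise interpretable.

import numpy as np

def fit_categorical_nb(X, y, w, n_values):
    """Weighted categorical naive Bayes with Laplace smoothing.
    Returns class log priors and per-attribute log-likelihood tables."""
    classes = (0, 1)                                 # 0 = legitimate, 1 = fraud
    log_prior = np.log([w[y == c].sum() / w.sum() for c in classes])
    tables = []
    for j, K in enumerate(n_values):                 # one table per attribute
        t = np.ones((2, K))                          # Laplace prior counts
        for c in classes:
            for k in range(K):
                t[c, k] += w[(y == c) & (X[:, j] == k)].sum()
        tables.append(np.log(t / t.sum(axis=1, keepdims=True)))
    return log_prior, tables

def woe_score(tables, x):
    """Weight of evidence for 'fraud': log posterior odds minus log prior
    odds, which under conditional independence is a sum of per-attribute
    log-likelihood ratios."""
    return sum(t[1, xj] - t[0, xj] for t, xj in zip(tables, x))

def adaboost_nb(X, y, n_values, rounds=10):
    """Discrete AdaBoost with naive Bayes base learners. The ensemble score
    stays an additive combination of weights of evidence."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        lp, tabs = fit_categorical_nb(X, y, w, n_values)
        scores = np.array([woe_score(tabs, x) for x in X])
        pred = (scores + lp[1] - lp[0] > 0).astype(int)
        err = w[pred != y].sum()
        if err == 0 or err >= 0.5:                   # stop on degenerate rounds
            break
        alpha = 0.5 * np.log((1 - err) / err)
        ensemble.append((alpha, tabs))
        w *= np.exp(alpha * np.where(pred != y, 1.0, -1.0))
        w /= w.sum()                                 # reweight toward mistakes
    return ensemble

def boosted_woe(ensemble, x):
    """Aggregate score: a weighted sum of the base models' weights of evidence."""
    return sum(a * woe_score(tabs, x) for a, tabs in ensemble)

On a claims matrix X of integer-coded attributes, ensemble = adaboost_nb(X, y, n_values) fits the model and boosted_woe(ensemble, x) scores a new claim, with larger values indicating stronger evidence of fraud; because the score decomposes over attributes, the contribution of each claim characteristic can be inspected, which is what underlies the intelligibility and accountability claims above.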