Compact approximations to Bayesian predictive distributions
ICML '05 Proceedings of the 22nd international conference on Machine learning
We present a learning algorithm for nominal vector data. It builds a complex classifier by iteratively adding a simple function that modifies the current classifier. To limit overtraining, we focus on a class of such functions for which optimal Bayesian learning is tractable. We investigate several classes of functions that yield models similar to Naïve Bayes and logistic classification. We report experimental results on a collection of standard data sets showing that our learning algorithm outperforms standard learning of these models.
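The abstract's stagewise idea — greedily adding one simple function per round to an additive classifier over nominal features — can be sketched as follows. This is an illustrative reconstruction, not the paper's exact Bayesian procedure: the function names (`fit_additive_nominal`, `predict`), the choice of per-feature lookup tables as the simple functions, and the Newton-style logistic update are all assumptions for the sketch.

```python
import math
from collections import defaultdict

def fit_additive_nominal(X, y, rounds=10, lr=0.5):
    """Greedy stagewise construction (illustrative sketch): each round
    adds the single-feature score table that most reduces the logistic
    loss of the current additive classifier on nominal data."""
    n, d = len(X), len(X[0])
    F = [0.0] * n                      # current additive scores per example
    model = []                         # list of (feature index, value -> delta)
    for _ in range(rounds):
        best = None
        for j in range(d):
            # Newton-style per-value update for feature j under logistic loss
            num, den = defaultdict(float), defaultdict(float)
            for i in range(n):
                p = 1.0 / (1.0 + math.exp(-F[i]))
                num[X[i][j]] += y[i] - p
                den[X[i][j]] += p * (1.0 - p)
            table = {v: lr * num[v] / (den[v] + 1e-9) for v in num}
            # Logistic loss if this table were added to the classifier
            loss = 0.0
            for i in range(n):
                s = F[i] + table[X[i][j]]
                loss += math.log1p(math.exp(-s)) if y[i] else math.log1p(math.exp(s))
            if best is None or loss < best[0]:
                best = (loss, j, table)
        _, j, table = best
        model.append((j, table))
        for i in range(n):
            F[i] += table[X[i][j]]
    return model

def predict(model, x):
    """Sum the simple functions' scores; positive score means class 1."""
    s = sum(table.get(x[j], 0.0) for j, table in model)
    return 1 if s > 0 else 0
```

Restricting each round to a simple per-feature table is one way to make the inner fit cheap and to keep the added functions weak, in the spirit of the paper's restriction to function classes where learning stays tractable.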