Averaged n-Dependence Estimators (AnDE) is a family of learning algorithms that ranges from low variance coupled with high bias through to high variance coupled with low bias. The asymptotic error of the lowest-bias variant is the Bayes optimal. The AnDE algorithms have training time linear in the number of training examples, learn in a single pass through the data, support incremental learning, handle missing values directly, and are robust in the face of noise. These characteristics make the algorithms particularly well suited to learning from large data sets. However, for higher orders of n they are very computationally demanding. This paper presents data structures and algorithms developed to reduce both the memory and the time required for training and classification. These enhancements have enabled the evaluation and comparison of A3DE's effectiveness. The results provide further support for the hypothesis that, as the number of training examples increases, decreasing error will be attained by members of the AnDE family with increasing levels of n.
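To make the single-pass, incremental character of the family concrete, the following is a minimal sketch of A1DE (better known as AODE), the n = 1 member of the AnDE family, for discrete attributes. The class name, the count-based data structures, and the Laplace smoothing choices are illustrative assumptions, not the paper's implementation; each attribute in turn acts as the "super-parent" of a one-dependence estimator, and the estimators' joint-probability estimates are averaged.

```python
from collections import defaultdict

class A1DE:
    """Illustrative sketch of A1DE/AODE: averaged one-dependence estimators
    over discrete attributes, trained incrementally in a single pass."""

    def __init__(self):
        self.n = 0                        # number of training examples seen
        self.joint = defaultdict(int)     # counts of (y, i, x_i)
        self.pair = defaultdict(int)      # counts of (y, i, x_i, j, x_j)
        self.classes = set()
        self.values = defaultdict(set)    # observed values per attribute

    def fit_one(self, x, y):
        """Single-pass, incremental update: only count tables are touched."""
        self.n += 1
        self.classes.add(y)
        for i, xi in enumerate(x):
            self.values[i].add(xi)
            self.joint[(y, i, xi)] += 1
            for j, xj in enumerate(x):
                self.pair[(y, i, xi, j, xj)] += 1

    def predict(self, x):
        best, best_score = None, -1.0
        for y in self.classes:
            score = 0.0
            for i, xi in enumerate(x):    # attribute i acts as super-parent
                # Laplace-smoothed estimate of P(y, x_i)
                p = (self.joint[(y, i, xi)] + 1) / (
                    self.n + len(self.classes) * len(self.values[i]))
                for j, xj in enumerate(x):
                    if j == i:
                        continue
                    # Laplace-smoothed estimate of P(x_j | y, x_i)
                    p *= (self.pair[(y, i, xi, j, xj)] + 1) / (
                        self.joint[(y, i, xi)] + len(self.values[j]))
                score += p                # average over estimators (up to 1/k)
            if score > best_score:
                best, best_score = y, score
        return best
```

Because training only increments count tables, cost is linear in the number of examples and the model can absorb new data at any time; the quadratic blow-up in the count tables as n grows is exactly the memory and time pressure the paper's data structures address.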