Naïve Bayes (NB) is an efficient and effective classifier in many cases. However, NB can suffer from poor performance when its conditional independence assumption is violated. While most recent research focuses on improving NB by alleviating the conditional independence assumption, we propose a new meta-learning technique that scales up NB by adopting an altered strategy to traditional Cascade Learning (CL). The new meta-learning technique is more effective than traditional CL and other meta-learning techniques such as Bagging and Boosting, while maintaining the efficiency of Naïve Bayes learning.
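For context, NB's conditional independence assumption factorizes the class posterior as P(c | x1, ..., xn) proportional to P(c) * prod_i P(xi | c). The sketch below is illustrative only: the abstract does not describe the authors' altered cascade strategy, so the `NaiveBayes` and `CascadeNB` classes and the generic cascade wiring shown here (appending the stage-1 prediction as an extra attribute for a stage-2 NB) are assumptions for illustration, not the paper's method.

```python
# Minimal sketch: categorical naive Bayes plus a generic two-stage cascade.
# The cascade structure is an assumed, generic one; the paper's specific
# "altered strategy" to Cascade Learning is not given in the abstract.
from collections import Counter, defaultdict

class NaiveBayes:
    """Categorical NB: score(c) = P(c) * prod_i P(x_i | c)."""
    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.prior = {c: y.count(c) / len(y) for c in self.classes}
        self.class_count = Counter(y)
        # cond[(i, v, c)] = count of feature i taking value v within class c
        self.cond = defaultdict(int)
        for row, c in zip(X, y):
            for i, v in enumerate(row):
                self.cond[(i, v, c)] += 1
        return self

    def predict_proba(self, row):
        scores = {}
        for c in self.classes:
            p = self.prior[c]
            for i, v in enumerate(row):
                # Independence assumption: per-feature factors multiply.
                # Add-one smoothing; denominator assumes a 2-value pseudo-domain.
                p *= (self.cond[(i, v, c)] + 1) / (self.class_count[c] + 2)
            scores[c] = p
        z = sum(scores.values()) or 1.0
        return {c: s / z for c, s in scores.items()}

    def predict(self, row):
        proba = self.predict_proba(row)
        return max(proba, key=proba.get)

class CascadeNB:
    """Generic cascade: stage 2 sees the original attributes plus the
    stage-1 predicted class as one extra discrete attribute."""
    def fit(self, X, y):
        self.stage1 = NaiveBayes().fit(X, y)
        X2 = [row + [self.stage1.predict(row)] for row in X]
        self.stage2 = NaiveBayes().fit(X2, y)
        return self

    def predict(self, row):
        return self.stage2.predict(row + [self.stage1.predict(row)])

if __name__ == "__main__":
    X = [["sunny", "hot"], ["rainy", "cool"], ["sunny", "cool"], ["rainy", "hot"]]
    y = ["no", "no", "yes", "no"]
    model = CascadeNB().fit(X, y)
    print(model.predict(["sunny", "cool"]))  # prints the cascaded prediction
```

Note that both stages train in a single pass over the data with simple counting, so a cascade of this shape keeps NB's linear training cost, which is consistent with the efficiency claim in the abstract.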