Given that no single classification method is best across all tasks, a variety of approaches has evolved to prevent the poor performance that results from a mismatch between a method's capabilities and a problem's characteristics. One approach is to determine in advance when a method is appropriate for a given problem. A second, more popular approach is to combine the capabilities of two or more classification methods. This paper provides evidence that combining classifiers can yield more robust solutions.
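As a minimal illustration of the second approach, classifier combination can be as simple as majority voting over the predictions of several base classifiers. The sketch below is not the method studied in the paper; the base classifiers here are invented one-feature threshold rules, used only to show the combination mechanism:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier predictions for one instance by majority vote."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical base classifiers: simple threshold rules on a scalar feature.
# Real ensembles would use trained models (e.g., decision trees, nearest
# neighbor, neural networks), each with different strengths.
classifiers = [
    lambda x: 1 if x > 0.3 else 0,
    lambda x: 1 if x > 0.5 else 0,
    lambda x: 1 if x > 0.7 else 0,
]

def ensemble_predict(x):
    """Predict a class label for x by polling every base classifier."""
    return majority_vote([clf(x) for clf in classifiers])
```

For an input of 0.6, the first two rules vote 1 and the third votes 0, so the ensemble outputs 1; a single ill-matched threshold no longer determines the outcome, which is the robustness the abstract alludes to.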