In this paper, we compare the performance of classifier combination methods (bagging, a modified random subspace method, classifier selection, and parametric fusion) with that of logistic regression under various characteristics of the input data. Four factors are used to simulate the logistic model: (a) the combination function among input variables, (b) the correlation between input variables, (c) the variance of the observations, and (d) the training set size. Because the combination function among input variables is typically unknown in practice, we treat it as an uncontrollable (noise) factor in a Taguchi design, which improves the practical relevance of our results. Our experimental results indicate the following: when the training set is large, logistic regression and bagging perform comparably; when the training set is small, logistic regression performs worse than bagging. When the training set is small and the correlation is strong, both the modified random subspace method and bagging outperform the other three methods. When the correlation is weak and the variance is small, parametric fusion and the classifier selection algorithm are, disappointingly, the worst performers.
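The comparison described above can be sketched in a few lines. This is a minimal illustration, not the authors' experimental protocol: it assumes two correlated Gaussian inputs, a simple linear combination function inside the logistic model, and arbitrary choices of correlation, noise scale, coefficients, and training sizes.

```python
# Sketch only: compare logistic regression with bagging on data simulated
# from a logistic model, for a small vs. a large training set.
# All parameter values below are illustrative assumptions, not from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def simulate(n, rho=0.6, sigma=1.0):
    """Two inputs with correlation rho; labels from a logistic model
    with additive observation noise of scale sigma."""
    cov = [[1.0, rho], [rho, 1.0]]
    X = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    logit = 1.5 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(0.0, sigma, size=n)
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)
    return X, y

X_test, y_test = simulate(5000)
for n_train in (30, 1000):  # small vs. large training set
    X_tr, y_tr = simulate(n_train)
    lr = LogisticRegression().fit(X_tr, y_tr)
    bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                            random_state=0).fit(X_tr, y_tr)
    print(n_train, lr.score(X_test, y_test), bag.score(X_test, y_test))
```

Repeating such runs over a grid of correlation, variance, and sample-size settings (and averaging over many replications) is the kind of factorial experiment the abstract describes.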