Dietterich (1998) reviews five statistical tests and proposes the 5 × 2 cv t test for determining whether there is a significant difference between the error rates of two classifiers. In our experiments, we noticed that the result of the 5 × 2 cv t test may vary depending on factors that should not affect it, and we propose a variant, the combined 5 × 2 cv F test, that combines multiple statistics to obtain a more robust test. Simulation results show that this combined version of the test has lower type I error and higher power than the 5 × 2 cv t test proper.
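As a concrete illustration, the combined F statistic pools the ten squared error-rate differences (two folds in each of five replications of 2-fold cross-validation) and divides by twice the sum of the five per-replication variance estimates; the result is compared against an F distribution with 10 and 5 degrees of freedom. A minimal sketch (the function name and array layout are illustrative, not from the original):

```python
import numpy as np

def combined_5x2cv_f(p):
    """Combined 5 x 2 cv F statistic.

    p: array of shape (5, 2), where p[i, j] is the difference in error
       rates of the two classifiers on fold j of replication i.
    Returns the F statistic, to be compared against F(10, 5).
    """
    p = np.asarray(p, dtype=float)
    if p.shape != (5, 2):
        raise ValueError("expected a 5 x 2 array of error-rate differences")
    p_bar = p.mean(axis=1)                          # mean difference per replication
    s2 = ((p - p_bar[:, None]) ** 2).sum(axis=1)    # variance estimate per replication
    return (p ** 2).sum() / (2.0 * s2.sum())

# Example with made-up error-rate differences:
diffs = [[0.10, 0.20], [0.15, 0.10], [0.20, 0.25], [0.05, 0.10], [0.10, 0.15]]
print(combined_5x2cv_f(diffs))
```

If the statistic exceeds the upper critical value of the F(10, 5) distribution at the chosen significance level, the null hypothesis that the two classifiers have the same error rate is rejected.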