In classifier combining, one tries to fuse the information given by a set of base classifiers. One of the difficulties in this process is how to deal with the variability between classifiers. Although various measures and many combining rules have been proposed in the past, the problem of constructing optimal combiners is still heavily studied. In this paper, we discuss and illustrate the possibilities of classifier embedding in order to analyse the variability of base classifiers, as well as their combining rules. To this end, a space is constructed in which classifiers can be represented as points. Such a space of low dimensionality is a Classifier Projection Space (CPS). It is first used to design a visual tool that gives more insight into the differences between various combining techniques, which is illustrated by examples. Finally, we discuss how the CPS may also be used as a basis for constructing new combining rules.
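The embedding idea can be sketched as follows: measure a pairwise dissimilarity between base classifiers (here, the fraction of validation samples on which two classifiers disagree) and project the resulting dissimilarity matrix into a low-dimensional space. This is a minimal sketch, not the authors' exact procedure; the predictions are made up for illustration, and classical MDS is assumed as the embedding method.

```python
import numpy as np

# Hypothetical 0/1 predictions of 4 base classifiers on 8 validation samples.
preds = np.array([
    [0, 1, 1, 0, 1, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 0],
    [1, 1, 1, 0, 0, 0, 1, 1],
    [0, 0, 1, 1, 1, 0, 0, 1],
])
n = preds.shape[0]

# Pairwise disagreement: fraction of samples on which two classifiers differ.
D = np.array([[np.mean(preds[i] != preds[j]) for j in range(n)]
              for i in range(n)])

# Classical MDS: double-center the squared dissimilarities, eigendecompose,
# and keep the two leading components as 2-D coordinates (the CPS).
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1]                   # largest eigenvalues first
k = 2                                               # dimensionality of the CPS
coords = eigvecs[:, order[:k]] * np.sqrt(np.maximum(eigvals[order[:k]], 0))

print(coords.shape)  # one 2-D point per base classifier
```

Classifiers that often agree land close together in `coords`, so plotting the two columns gives the kind of visual diversity analysis the abstract describes.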