The need for accurate, robust, optimised classification systems has driven information fusion methodology towards a state of early maturity over the last decade. Among the field's shortcomings we identify the lack of statistical foundation in many ad hoc fusion methods and the absence of strong non-linear combiners capable of partitioning complex decision spaces. In this work, we draw parallels between the well-known decision templates (DT) fusion method and the nearest mean classifier in order to derive a useful formulation for the overall expected classification error. Additionally, we evaluate DTs against a support vector machine (SVM) discriminant hyper-classifier on two benchmark biomedical datasets. Beyond measuring performance statistics, we advocate the theoretical advantages of support vectors as multiple attractor points in a hyper-classifier's feature space.
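The DT–nearest-mean parallel drawn above can be made concrete: each class template is the mean decision profile (the stacked soft outputs of all base classifiers) over that class's training samples, and a test sample is assigned to the class whose template is nearest in the profile space. The following is a minimal NumPy sketch of this standard DT scheme, not the paper's exact implementation; the function names and the squared-Euclidean similarity measure are illustrative choices.

```python
import numpy as np

def fit_decision_templates(profiles, labels, n_classes):
    """profiles: (n_samples, n_classifiers, n_classes) array of soft
    base-classifier outputs. The template for class c is the mean
    decision profile over training samples of class c — which is why
    DT fusion behaves as a nearest-mean classifier in profile space."""
    return np.stack([profiles[labels == c].mean(axis=0)
                     for c in range(n_classes)])

def predict_decision_template(templates, profile):
    """Assign the class whose template is closest (squared Euclidean
    distance) to the test sample's decision profile."""
    dists = ((templates - profile) ** 2).sum(axis=(1, 2))
    return int(np.argmin(dists))

# Toy usage: two base classifiers, two classes.
profiles = np.array([[[0.9, 0.1], [0.8, 0.2]],   # class 0 samples
                     [[0.8, 0.2], [0.7, 0.3]],
                     [[0.1, 0.9], [0.2, 0.8]],   # class 1 samples
                     [[0.2, 0.8], [0.3, 0.7]]])
labels = np.array([0, 0, 1, 1])
templates = fit_decision_templates(profiles, labels, n_classes=2)
print(predict_decision_template(templates, np.array([[0.85, 0.15],
                                                     [0.75, 0.25]])))
```

An SVM hyper-classifier, by contrast, would train a discriminant directly on these decision profiles rather than comparing against per-class mean templates, which is the comparison the abstract sets up.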