Fast classification in incrementally growing spaces
IbPRIA'11 Proceedings of the 5th Iberian conference on Pattern recognition and image analysis
Neural networks have become very useful tools for input-output knowledge discovery. However, some of the most powerful schemes require very complex machines and, thus, a large amount of computation. This paper presents a general technique for reducing the computational burden of the operational phase of most neural networks that compute their output as a weighted sum of terms, a family that includes a wide variety of schemes, such as Multi-Net or Radial Basis Function (RBF) networks. Basically, the idea consists in evaluating the sum terms sequentially, using a series of thresholds associated with the confidence that a partial output will coincide with the overall network's classification decision. Furthermore, we design procedures for conveniently sorting the network units, so that the most important ones are evaluated first. The possibilities of this strategy are illustrated with experiments on a benchmark of binary classification problems, using Real AdaBoost and RBF networks, which show that substantial computational savings can be achieved without significant degradation of recognition accuracy.
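The early-stopping idea in the abstract can be sketched in a few lines: the ensemble output f(x) = Σ w_t h_t(x) is accumulated term by term, and evaluation halts as soon as the partial sum's magnitude clears a per-step confidence threshold. The function names and the particular (conservative) threshold choice below are illustrative assumptions, not the paper's exact procedure.

```python
def early_exit_classify(units, weights, x, thresholds):
    """Classify x with an ensemble f(x) = sum_t w_t * h_t(x), h_t(x) in [-1, 1].

    units      : callables h_t, assumed pre-sorted so the most important
                 (e.g. highest-weight) units come first
    weights    : positive combination weights w_t
    thresholds : per-step confidence thresholds; if |partial sum| exceeds
                 thresholds[t] after evaluating unit t, stop early
    Returns (predicted label, number of units actually evaluated).
    """
    partial = 0.0
    for t, (h, w) in enumerate(zip(units, weights)):
        partial += w * h(x)
        # Early exit: the partial output is already confident enough that
        # the remaining terms will not change the classification decision.
        if abs(partial) > thresholds[t]:
            return (1 if partial >= 0 else -1), t + 1
    return (1 if partial >= 0 else -1), len(units)


def conservative_thresholds(weights):
    """One possible threshold choice: the sum of the weights still to be
    evaluated. Since |h_t(x)| <= 1, clearing this bound guarantees the
    remaining units cannot flip the sign, so accuracy is unaffected."""
    total = sum(weights)
    out = []
    for w in weights:
        total -= w
        out.append(total)
    return out
```

Looser (smaller) thresholds trade a small risk of disagreeing with the full ensemble for larger computational savings, which is the accuracy/cost trade-off the experiments explore.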