A new class of neural networks (NNs), designated Multiple Feed-Forward (MFF) networks, and a new gradient-based learning algorithm, Multiple Back-Propagation (MBP), are proposed and analysed. MFF networks are obtained by integrating two feed-forward networks (a main network and a space network) in a novel manner. Their major characteristic is the ability to partition the input space by means of selective neurons, whose actuation is determined by the spatial localisation of the input patterns. Only the neurons fired by a particular data point are relevant to it, yet the network retains the capacity to closely approximate irregular, non-linear features in localised regions. Together, MFF networks and the MBP algorithm constitute a new neural architecture that, in most cases, is a better design choice than multi-layer perceptron (MLP) networks trained with the back-propagation (BP) algorithm. The key reason is the use of computable importance factors for the selective neurons, whose magnitudes are derived from the space network and the training data; these allow the underlying mapping function to be decomposed into simpler sub-functions that require more parsimonious networks. Experimental results on benchmark problems confirm the efficiency of the proposed gradient-based learning algorithm, borne out by better generalisation and, in most cases of online learning, by shorter training times than MLP networks trained with BP.
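The coupling of the main and space networks described above can be sketched roughly as follows. This is a minimal, hypothetical illustration only: the weight names, the sigmoid activations, and the multiplicative coupling of importance factors to hidden activations are assumptions made for clarity, not the paper's exact formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 2, 4, 1

# Main network weights (input -> hidden -> output)
W1 = rng.normal(size=(n_hidden, n_in))
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_out, n_hidden))
b2 = np.zeros(n_out)

# Space network: maps the same input to one importance factor
# per hidden neuron, localising which neurons "fire" for a pattern
Ws = rng.normal(size=(n_hidden, n_in))
bs = np.zeros(n_hidden)

def mff_forward(x):
    m = sigmoid(Ws @ x + bs)       # importance factors in (0, 1)
    h = m * sigmoid(W1 @ x + b1)   # selective neurons: activation scaled by importance
    return sigmoid(W2 @ h + b2)

y = mff_forward(np.array([0.5, -0.2]))
```

Under this sketch, a neuron whose importance factor is near zero for a given input pattern contributes almost nothing to the output for that pattern, which is the partitioning behaviour the abstract attributes to selective neurons.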