A learning algorithm is presented for circuits consisting of a single layer of perceptrons. We refer to such circuits as parallel perceptrons. In spite of their simplicity, these circuits are universal approximators for arbitrary Boolean and continuous functions. In contrast to backpropagation for multi-layer perceptrons, our new learning algorithm, the parallel delta rule (p-delta rule), only has to tune a single layer of weights, and it does not require the computation and communication of analog values with high precision. Reduced communication also distinguishes our new learning rule from other learning rules for such circuits, such as those traditionally used for MADALINE. A theoretical analysis shows that the p-delta rule does in fact implement gradient descent with regard to a suitable error measure, although it does not require the computation of derivatives. Furthermore, experiments on common real-world benchmark datasets show that its performance is competitive with that of other learning approaches from neural networks and machine learning. Our algorithm thus also provides an interesting new hypothesis for the organization of learning in biological neural systems.
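The training scheme described above can be sketched in a few lines of NumPy. The sketch below loosely follows the p-delta idea for binary classification: each perceptron votes with the sign of its activation, the circuit outputs the sign of the vote total, and a wrong circuit output triggers a simple additive update (no derivatives) on exactly those perceptrons that contributed to the error, plus a margin-stabilization push for activations near zero. Hyperparameter names (`eta`, `gamma`, `mu`) and all values are illustrative assumptions, not the paper's tuned settings.

```python
import numpy as np

def train_p_delta(X, y, n_perceptrons=3, eta=0.01, gamma=0.05, mu=1.0,
                  epochs=200, seed=0):
    """Minimal sketch of p-delta training for a parallel perceptron.

    X: (m, d) inputs (a bias input is appended internally).
    y: targets in {-1, +1}.
    Returns the weight matrix and a prediction function.
    """
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])      # append constant bias input
    W = rng.normal(size=(n_perceptrons, Xb.shape[1]))
    W /= np.linalg.norm(W, axis=1, keepdims=True)  # start from unit-norm weights

    def predict(X):
        Xb = np.hstack([X, np.ones((len(X), 1))])
        # each perceptron votes sign(w.x); the circuit squashes the vote total
        votes = np.sign(Xb @ W.T).sum(axis=1)
        return np.where(votes >= 0, 1, -1)

    for _ in range(epochs):
        for x, t in zip(Xb, y):
            acts = W @ x                            # per-perceptron activations
            out = 1 if np.sign(acts).sum() >= 0 else -1
            for i in range(n_perceptrons):
                if out > t and acts[i] >= 0:
                    W[i] -= eta * x                 # too many +1 votes: push i down
                elif out < t and acts[i] < 0:
                    W[i] += eta * x                 # too many -1 votes: push i up
                elif 0 <= acts[i] < gamma:
                    W[i] += eta * mu * x            # stabilize small + activation
                elif -gamma < acts[i] < 0:
                    W[i] -= eta * mu * x            # stabilize small - activation
                # soft renormalization: keep each weight vector near unit length
                W[i] += eta * (1.0 - np.dot(W[i], W[i])) * W[i]
    return W, predict

# Illustrative usage on a separable toy problem (not a benchmark from the paper):
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
W, predict = train_p_delta(X, y)
accuracy = (predict(X) == y).mean()
```

Note that every update is a scaled copy of the input vector itself, so each perceptron only needs the single-bit circuit output and its own local activation, which is the reduced-communication property the abstract emphasizes.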