The mathematical foundations of learning machines
Machine Learning
Iterative Methods for Sparse Linear Systems
Control Perspectives on Numerical Algorithms And Matrix Problems (Advances in Design and Control) (Advances in Design and Control 10)
Neural Computation
Adaptive Ho-Kashyap rules for perceptron training
IEEE Transactions on Neural Networks
Perceptrons, proposed in the seminal 1943 paper of McCulloch and Pitts, have remained of interest to the neural network community because of their simplicity and their usefulness in classifying linearly separable data; they can be viewed as implementing iterative procedures for ''solving'' linear inequalities. Gradient descent and conjugate gradient methods, normally used for linear equalities, can be adapted to linear inequalities through simple modifications that have been proposed in the literature but not analyzed completely. This paper applies a recently proposed control-inspired approach to the design of iterative steepest descent and conjugate gradient algorithms for perceptron training in batch mode: certain parameters of the training algorithm are regarded as controls, and a control Liapunov technique is then used to choose appropriate values for these parameters.
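To make the connection between perceptron training and linear inequalities concrete, the following is a minimal sketch (not the paper's algorithm) of batch-mode gradient descent applied to the system of inequalities A w > 0, where each row of A is a training sample with its class label folded in. It minimizes a squared-hinge criterion over the violated inequalities; the toy data, margin, and fixed step size are illustrative assumptions — the paper's contribution is precisely to choose such parameters via a control Liapunov technique rather than fix them a priori.

```python
import numpy as np

def batch_perceptron(A, margin=1.0, lr=0.1, iters=1000):
    """Gradient descent on J(w) = 0.5 * sum_i max(0, margin - a_i . w)^2,
    a batch-mode criterion whose minimizers satisfy A w >= margin > 0.
    The margin and step size here are illustrative, not tuned."""
    n, d = A.shape
    w = np.zeros(d)
    for _ in range(iters):
        resid = margin - A @ w           # violation of each inequality
        active = np.maximum(resid, 0.0)  # only violated constraints contribute
        grad = -A.T @ active             # gradient of J(w)
        w -= lr * grad
    return w

# Linearly separable toy data; rows of the negative class are negated,
# so classification reduces to the inequalities A w > 0.
X = np.array([[1.0, 2.0], [2.0, 1.5], [-1.0, -1.5], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
A = X * y[:, None]
w = batch_perceptron(A)
print(np.all(A @ w > 0))  # True once all inequalities are satisfied
```

On separable data like this, the iteration stops changing once every sample clears the margin, since all residuals become inactive and the gradient vanishes.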