In this work, a new supervised learning method for single-layer neural networks based on a regularized cost function is presented. The method obtains the optimal weights and biases by solving a system of linear equations, so the globally optimal solution is always guaranteed. To verify the soundness of the proposed learning algorithm and to analyze the effect of the regularization term, two simulations were performed, one on a classification problem and one on a regression problem. The results obtained demonstrate the validity of the method.
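As a minimal sketch of how such a closed-form, regularized fit might look, the following code solves the regularized normal equations for a single linear layer. The exact cost function used in the paper is not reproduced here, so the ridge-style L2 penalty, the function name fit_single_layer, and the parameter lam are illustrative assumptions rather than the authors' actual formulation.

```python
import numpy as np

def fit_single_layer(X, Y, lam=1e-2):
    """Closed-form weights and bias for a single-layer linear network
    trained with a ridge-style (L2) regularized squared-error cost.

    X   : (n_samples, n_inputs) input matrix
    Y   : (n_samples, n_outputs) target matrix
    lam : regularization strength (hypothetical parameter name)
    """
    # Append a constant column so the bias is learned jointly with the weights.
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    d = Xb.shape[1]

    # L2 penalty on the weights only; the bias term is not penalized.
    R = lam * np.eye(d)
    R[-1, -1] = 0.0

    # Regularized normal equations: (Xb^T Xb + R) W = Xb^T Y.
    # The cost is convex, so solving this linear system yields the unique
    # global minimizer, i.e. no local minima can occur.
    A = Xb.T @ Xb + R
    W = np.linalg.solve(A, Xb.T @ Y)
    return W[:-1], W[-1]  # weights, bias

# Usage: a small regression example with known linear structure.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
Y = X @ np.array([[1.0], [-2.0], [0.5]]) + 0.3
W, b = fit_single_layer(X, Y, lam=0.1)
```

Because the solution comes from a single linear solve, the effect of the regularization term can be studied simply by re-running the fit with different values of lam, which is the kind of analysis the classification and regression simulations address.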