The article presents a method for learning the weights of one-layer feed-forward neural networks by minimizing either the sum of squared errors or the maximum absolute error, measured in the input scale of the activation function (i.e., before the nonlinearity is applied). With this error measure a global optimum exists and can be obtained by solving linear systems of equations or linear programming problems, at a much lower computational cost than that of standard methods. A further variant computes a large set of weight estimates, from which robust (mean or median) estimates and their standard errors are derived; the latter give a good measure of the quality of the fit. The standard one-layer algorithms are then improved by learning the neural (activation) functions instead of assuming them known. A set of application examples illustrates the methods, and a comparison with other high-performance learning algorithms shows the proposed methods to be at least 10 times faster than the fastest standard algorithm used in the comparison.
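The least-squares variant described above can be sketched as follows. This is a minimal illustration, not the authors' exact formulation: it assumes a single-layer network y = f(Wx + b) with an invertible activation f (here tanh), maps the desired outputs back through f⁻¹ so the error is measured in the input scale, and then solves an ordinary linear least-squares problem, which has a unique global optimum. The data, weights, and clipping threshold are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples, 3 inputs; noiseless targets generated by a
# known one-layer network so the linear solve can recover it exactly.
X = rng.normal(size=(200, 3))
true_w = np.array([0.5, -1.0, 0.8])
true_b = 0.3
d = np.tanh(X @ true_w + true_b)        # desired outputs in (-1, 1)

# Measure the error in the input scale of the activation: transform
# the targets through the inverse activation f^{-1} = arctanh.
# Clipping guards against |d| = 1, where arctanh diverges.
z = np.arctanh(np.clip(d, -0.999999, 0.999999))

# Augment the inputs with a bias column and solve the linear
# least-squares problem W = argmin ||Xb w - z||^2 in closed form.
Xb = np.hstack([X, np.ones((X.shape[0], 1))])
w, *_ = np.linalg.lstsq(Xb, z, rcond=None)

y = np.tanh(Xb @ w)                     # network outputs after training
# w[:3] recovers true_w and w[3] the bias, since the data is realizable.
```

Replacing the least-squares solve with a linear program over the same input-scale residuals yields the maximum-absolute-error (minimax) variant mentioned in the abstract.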