This paper introduces a learning method for two-layer feedforward neural networks based on sensitivity analysis, in which each of the two layers is trained with a linear algorithm. First, random values are assigned to the outputs of the first layer; these intermediate values are then updated using sensitivity formulas that involve the weights of both layers, and the process is repeated until convergence. Because the weights of each layer are obtained by solving a linear system of equations, the computational cost is considerably reduced. The method also yields, at no extra cost, the local sensitivities of the least-squares errors with respect to the input and output data, since the required information is already available from the training process. The method, called the Sensitivity-Based Linear Learning Method (SBLLM), can also provide an initial set of weights that significantly improves the behavior of other learning algorithms. The theoretical basis of the method is presented, and its performance is illustrated on several well-known data sets, where it is compared with other learning algorithms. The results show a learning speed that is generally faster than that of existing methods, as well as significant improvements when SBLLM is used to initialize other well-known algorithms.
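The alternating scheme described above lends itself to a compact implementation. The following is a minimal Python/NumPy sketch, assuming logistic activations in both layers, a fixed gradient step for the sensitivity update of the hidden outputs, and simple clipping to keep those outputs in the domain of the inverse activation; the function and parameter names (sbllm_sketch, rho, n_iter) and these numerical choices are illustrative assumptions, not details taken from the paper.

```python
# Sketch of the alternating scheme from the abstract (SBLLM-style),
# assuming logistic activations in both layers. Step size, clipping
# bounds, and names are illustrative assumptions.
import numpy as np

def logistic(u):
    return 1.0 / (1.0 + np.exp(-u))

def logit(p):
    return np.log(p / (1.0 - p))

def sbllm_sketch(X, Y, n_hidden, n_iter=100, rho=0.01, seed=None):
    """X: (n_samples, n_inputs); Y: (n_samples, n_outputs), values in (0, 1)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    Xb = np.hstack([X, np.ones((n, 1))])  # inputs with a bias column
    eps = 1e-6

    # Step 1: assign random values in (0, 1) to the hidden-layer outputs.
    Z = rng.uniform(0.05, 0.95, size=(n, n_hidden))

    for _ in range(n_iter):
        Zb = np.hstack([Z, np.ones((n, 1))])

        # Step 2: train each layer by linear least squares, using the
        # inverse activation of its desired outputs as linear targets.
        W1, *_ = np.linalg.lstsq(Xb, logit(Z), rcond=None)  # layer 1
        W2, *_ = np.linalg.lstsq(Zb, logit(Y), rcond=None)  # layer 2

        # Step 3: update Z with the sensitivities (gradients) of the two
        # least-squares errors with respect to the hidden outputs.
        R1 = Xb @ W1 - logit(Z)                 # layer-1 residuals
        R2 = Zb @ W2 - logit(Y)                 # layer-2 residuals
        dQ1 = -2.0 * R1 / (Z * (1.0 - Z))       # d||R1||^2/dZ via d(logit)
        dQ2 = 2.0 * R2 @ W2[:-1].T              # d||R2||^2/dZ (bias row dropped)
        Z = np.clip(Z - rho * (dQ1 + dQ2), eps, 1.0 - eps)

    def predict(Xnew):
        Xn = np.hstack([Xnew, np.ones((len(Xnew), 1))])
        H = logistic(Xn @ W1)
        Hb = np.hstack([H, np.ones((len(H), 1))])
        return logistic(Hb @ W2)

    return W1, W2, predict
```

Note that each weight update reduces to an ordinary least-squares solve, so the per-iteration cost is dominated by the two `lstsq` calls; this is the source of the computational saving the abstract claims, and the gradients computed in Step 3 are exactly the sensitivities that come for free from the training process.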