A new class of search-based training algorithms for feedforward networks is introduced. These algorithms calculate no analytical gradients and use no stochastic or genetic search techniques. Instead, the forward step is used to evaluate the error in response to localized weight changes via systematic search. One of the simplest variants of this type, the variable step search (VSS) algorithm, is studied in detail. The VSS procedure changes one network parameter at a time and therefore imposes no restrictions on the network structure or the type of transfer functions. A rough approximation of the gradient direction and the determination of the optimal step along this direction toward the minimum of the cost function are performed simultaneously. Modifying the value of a single weight changes the signals only in a small fragment of the network, allowing contributions to the error to be calculated efficiently. Several heuristics for increasing the efficiency of the VSS algorithm are discussed. Tests on benchmark data show that VSS performs no worse than, and sometimes significantly better than, such renowned algorithms as Levenberg-Marquardt or scaled conjugate gradient.
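The core idea described above — perturb one weight at a time, keep the change if the error drops, and adapt a per-parameter step size — can be sketched as follows. This is a minimal illustration of coordinate-wise search training, not the paper's actual implementation: function names, the step-adaptation constants, and the tiny tanh network are all assumptions, and for simplicity the sketch recomputes the full forward pass on every trial rather than exploiting the localized signal changes that make the real VSS algorithm efficient.

```python
import numpy as np

def mse(net, X, y):
    """Forward pass through a one-hidden-layer tanh network; return mean squared error."""
    W1, b1, W2, b2 = net
    h = np.tanh(X @ W1 + b1)          # hidden-layer activations
    out = h @ W2 + b2                 # linear output layer
    return float(np.mean((out - y) ** 2))

def vss_train(net, X, y, epochs=50, step0=0.5, grow=2.0, shrink=0.5, min_step=1e-6):
    """Coordinate-wise search sketch (hypothetical constants, not the paper's):
    try +/- step on each parameter, accept any change that lowers the error,
    grow the step after a success and shrink it after a failure."""
    steps = [np.full_like(p, step0) for p in net]   # one adaptive step per parameter
    best = mse(net, X, y)
    for _ in range(epochs):
        for p, s in zip(net, steps):
            it = np.nditer(p, flags=['multi_index'])
            for _ in it:
                idx = it.multi_index
                old = p[idx]
                improved = False
                for sign in (+1.0, -1.0):           # probe both directions
                    p[idx] = old + sign * s[idx]
                    e = mse(net, X, y)
                    if e < best:
                        best = e                    # keep the improving change
                        improved = True
                        break
                    p[idx] = old                    # revert the failed probe
                s[idx] = s[idx] * grow if improved else max(s[idx] * shrink, min_step)
    return net, best

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(20, 2))
y = X[:, :1] * X[:, 1:2]                            # simple smooth target
net = [rng.normal(0, 0.5, (2, 4)), np.zeros(4),
       rng.normal(0, 0.5, (4, 1)), np.zeros(1)]
e0 = mse(net, X, y)
net, e1 = vss_train(net, X, y)
```

Because every accepted change strictly lowers the cost, the training error is monotonically non-increasing, which mirrors the abstract's point that no gradient or stochastic machinery is needed.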