A Manhattan search algorithm for minimizing the artificial neural network error function is outlined in this paper. From its current position in Cartesian coordinates, a search vector moves along orthogonal directions to locate a lower function value. The algorithm computes an optimized step length for rapid convergence; this longer step is taken when consecutive searches succeed in reducing the function value, and it identifies a favorable descent direction. The search method is suited to complex error surfaces where derivative information is difficult to obtain, or where the surface is nearly flat and the rate of change of the function value is almost negligible, a situation in which most derivative-based training algorithms face difficulty. Because the algorithm avoids derivative information of the error function, it is an attractive alternative when derivative-based algorithms struggle with complex ridges and flat valleys. If the search becomes trapped in a local minimum, the search vector explores neighboring descent directions to move out of it. The algorithm therefore differs from first- and second-order derivative-based training methods. To measure its performance, an electric energy generation estimation model for the Fiji Islands and the "L-T" letter recognition problem are solved. Bootstrap analysis shows that the algorithm's predictive and classification abilities are high. The algorithm is reliable when the solution to a problem is unknown; it can therefore be used to identify benchmark solutions.
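The sketch below illustrates the general idea described above: a derivative-free coordinate ("Manhattan") search that probes orthogonal directions, lengthens the step after successful moves, and shrinks it to explore the neighborhood when no descent is found. It is a minimal illustration, not the authors' exact procedure; the function name `manhattan_search`, the step-adaptation factors `grow` and `shrink`, and the quadratic test error are all assumptions introduced here for demonstration.

```python
import numpy as np

def manhattan_search(f, x0, step=0.5, grow=2.0, shrink=0.5,
                     min_step=1e-6, max_iter=10000):
    """Minimal derivative-free coordinate-search sketch (hypothetical parameters).

    From the current point, trial moves are made along each orthogonal
    (coordinate) direction and accepted only if they reduce f. After a
    successful sweep the step is enlarged (a simple stand-in for the paper's
    optimized step length); after a failed sweep it is shrunk, which lets the
    search probe a finer neighborhood around a possibly trapped point.
    """
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(x.size):              # orthogonal search directions
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                ft = f(trial)
                if ft < fx:                  # accept only descent moves
                    x, fx = trial, ft
                    improved = True
                    break
        if improved:
            step *= grow                     # consecutive success: lengthen step
        else:
            step *= shrink                   # failure: refine the neighborhood
            if step < min_step:
                break
    return x, fx

if __name__ == "__main__":
    # Toy stand-in for a network error surface: a separable quadratic.
    err = lambda w: float(np.sum((w - np.array([1.0, -2.0, 0.5])) ** 2))
    w_opt, e_opt = manhattan_search(err, x0=np.zeros(3))
    print(w_opt, e_opt)
```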