Performance of derivative free search ANN training algorithm with time series and classification problems

  • Authors:
  • Shamsuddin Ahmed

  • Affiliations:
  • Graduate School of Business, The University of the South Pacific, Suva, Fiji

  • Venue:
  • Computational Statistics
  • Year:
  • 2013

Abstract

A Manhattan search algorithm for minimizing an artificial neural network error function is outlined in this paper. From its current position in Cartesian coordinates, a search vector moves along orthogonal directions to locate a lower function value. The algorithm computes an optimized step length for rapid convergence; this step is taken when consecutive searches succeed in reducing the function value, and the optimized step length identifies a favorable descent direction. The method is suited to complex error surfaces where derivative information is difficult to obtain or where the surface is nearly flat: near a flat region the rate of change of the function value is almost negligible, and most derivative-based training algorithms face difficulty in such scenarios. Because the algorithm avoids derivative information entirely, it is an attractive search method when derivative-based algorithms fail on complex ridges and flat valleys. If the search becomes trapped in a local minimum, the search vector steps out by exploring descent directions in the neighborhood. The algorithm thus differs from first- and second-order derivative-based training methods. To measure its performance, an electric energy generation model for the Fiji Islands is estimated and an "L-T" letter recognition problem is solved. Bootstrap analysis shows that the algorithm's predictive and classification abilities are high. The algorithm is reliable when the solution to a problem is unknown and can therefore be used to establish benchmark solutions.
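
The abstract describes a coordinate-wise (Manhattan) pattern search with an adaptive step length and an escape move for local minima. The paper's exact update rules are not reproduced here, but a minimal sketch of this class of derivative-free search might look like the following; the expansion/contraction factors, tolerance, and random escape probe are all illustrative assumptions, not taken from the paper:

```python
import numpy as np

def manhattan_search(f, x0, step=0.5, expand=2.0, shrink=0.5,
                     tol=1e-8, max_iter=10000, rng=None):
    """Derivative-free coordinate (Manhattan) search sketch.

    Probes each orthogonal direction from the current point; the step
    length is expanded after a successful sweep and contracted after a
    failed one. All constants are illustrative, not from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(x.size):            # orthogonal search directions
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                ft = f(trial)
                if ft < fx:                # accept any descent move
                    x, fx = trial, ft
                    improved = True
                    break
        if improved:
            step *= expand                 # consecutive success: longer step
        else:
            step *= shrink                 # failure: contract the step
            if step < tol:
                # Assumed escape heuristic: probe a random neighbor
                # before declaring convergence.
                trial = x + rng.normal(scale=10 * tol, size=x.size)
                ft = f(trial)
                if ft < fx:
                    x, fx, step = trial, ft, 0.5
                else:
                    break
    return x, fx

if __name__ == "__main__":
    # Demo on a 2-D quadratic with a flat, elongated valley,
    # the kind of surface where gradient methods slow down.
    f = lambda v: (v[0] - 3.0) ** 2 + 0.01 * (v[1] + 1.0) ** 2
    x_min, f_min = manhattan_search(f, [0.0, 0.0])
    print("minimum near", x_min, "value", f_min)
```

In an ANN training setting, `f` would be the network error as a function of the flattened weight vector; the search itself needs only function evaluations, which is what makes it usable where gradients are unavailable or uninformative.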