Let a biogeography-based optimizer train your Multi-Layer Perceptron

  • Authors:
  • Seyedali Mirjalili; Seyed Mohammad Mirjalili; Andrew Lewis

  • Affiliations:
  • School of Information and Communication Technology, Griffith University, Nathan, Brisbane, QLD 4111, Australia; Zharfa Pajohesh System (ZPS) Co., Unit 5, No. 30, West 208 St., Third Sq. Tehranpars, P.O. Box 1653745696, Tehran, Iran; School of Information and Communication Technology, Griffith University, Nathan, Brisbane, QLD 4111, Australia

  • Venue:
  • Information Sciences: an International Journal

  • Year:
  • 2014

Abstract

The Multi-Layer Perceptron (MLP), one of the most widely used Neural Networks (NNs), has been applied to many practical problems. An MLP must be trained for each specific application, and training commonly suffers from entrapment in local minima, slow convergence, and sensitivity to initialization. This paper proposes using the recently developed Biogeography-Based Optimization (BBO) algorithm to train MLPs and mitigate these problems. To investigate the efficiency of BBO in training MLPs, five classification datasets and six function approximation datasets are employed. The results are compared with those of five well-known heuristic algorithms, Back Propagation (BP), and the Extreme Learning Machine (ELM) in terms of entrapment in local minima, result accuracy, and convergence rate. The results show that training MLPs with BBO is significantly better than the current heuristic learning algorithms and BP, and that BBO provides very competitive results in comparison with ELM.
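
The abstract describes the approach only at a high level. Purely as an illustration of the idea, a BBO-style trainer can be read as evolving a population of candidate MLP weight vectors by migration and mutation while minimizing a training error such as mean squared error. The Python/NumPy sketch below follows that reading; the network shape, rates, dataset, and every function name here are illustrative assumptions, not the authors' implementation.

import numpy as np

# Illustrative assumptions (not from the paper): a one-hidden-layer MLP with
# sigmoid units, whose flat weight vector is optimized by a BBO-style search
# that minimizes mean squared error (MSE) on the training data.

def mlp_forward(weights, X, n_in, n_hidden, n_out):
    # Unpack the flat weight vector into layer matrices and run a forward pass.
    i = 0
    W1 = weights[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = weights[i:i + n_hidden]; i += n_hidden
    W2 = weights[i:i + n_hidden * n_out].reshape(n_hidden, n_out); i += n_hidden * n_out
    b2 = weights[i:i + n_out]
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))        # sigmoid hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))     # sigmoid output layer

def mse(weights, X, y, dims):
    return np.mean((mlp_forward(weights, X, *dims) - y) ** 2)

def bbo_train_mlp(X, y, n_hidden=5, pop_size=30, n_iter=200, p_mutate=0.05, seed=0):
    # BBO-style search over MLP weight vectors (a sketch, not the paper's code).
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], y.shape[1]
    dims = (n_in, n_hidden, n_out)
    n_var = n_in * n_hidden + n_hidden + n_hidden * n_out + n_out

    habitats = rng.uniform(-1.0, 1.0, size=(pop_size, n_var))   # candidate weight sets
    cost = np.array([mse(h, X, y, dims) for h in habitats])

    for _ in range(n_iter):
        order = np.argsort(cost)                  # best (lowest MSE) habitat first
        habitats, cost = habitats[order], cost[order]
        rank = np.arange(pop_size)
        mu = 1.0 - rank / (pop_size - 1)          # emigration rate: best emigrates most
        lam = 1.0 - mu                            # immigration rate: worst immigrates most

        new_habitats = habitats.copy()
        for k in range(pop_size):
            for d in range(n_var):
                if rng.random() < lam[k]:
                    # Migration: copy this variable from a habitat chosen in proportion to mu.
                    src = rng.choice(pop_size, p=mu / mu.sum())
                    new_habitats[k, d] = habitats[src, d]
                if rng.random() < p_mutate:
                    # Mutation: random reset within the search range.
                    new_habitats[k, d] = rng.uniform(-1.0, 1.0)

        new_cost = np.array([mse(h, X, y, dims) for h in new_habitats])
        worst = np.argmax(new_cost)
        new_habitats[worst], new_cost[worst] = habitats[0], cost[0]   # elitism: keep previous best
        habitats, cost = new_habitats, new_cost

    best = int(np.argmin(cost))
    return habitats[best], cost[best]

# Toy usage on hypothetical data: learn XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
weights, err = bbo_train_mlp(X, y)
print("final training MSE:", err)

The key design point the abstract emphasizes, searching the weight space with a population-based global optimizer instead of following gradients as BP does, is what the migration/mutation loop above is meant to illustrate; the paper's own operators and parameter settings should be taken from the paper itself.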