Evolutionary selection extreme learning machine optimization for regression

  • Authors:
  • Guorui Feng;Zhenxing Qian;Xinpeng Zhang

  • Affiliations:
  • Shanghai University, School of Communication and Information Engineering, 200072, Shanghai, China (all authors)

  • Venue:
  • Soft Computing - A Fusion of Foundations, Methodologies and Applications - Special Issue on Extreme Learning Machines (ELM 2011) Hangzhou, China, December 6 – 8, 2011
  • Year:
  • 2012


Abstract

A neural network regression model can approximate unknown datasets with small error. As an important global regression method, the extreme learning machine (ELM) is a typical learning algorithm for single-hidden-layer feedforward networks, notable for its good generalization performance and fast training. The "randomness" of its input weights lets the nonlinear combination of hidden nodes achieve arbitrary function approximation. In this paper, we seek an alternative mechanism for setting the input connections, inspired by evolutionary algorithms. After predefining the number L of hidden nodes, we generate an initial ELM model in which each hidden node is treated as a gene. The hidden nodes are ranked, the larger-weight nodes are retained for the updated ELM, and the L/2 trivial hidden nodes are placed in a candidate reservoir. We then generate L/2 new hidden nodes and combine them with the reservoir to form L candidates; a second ranking with fitness-proportional selection chooses L/2 of these hidden nodes, which are recombined with the retained nodes to form the evolutionary selection ELM. The entire algorithm can be applied to large-scale dataset regression. Experiments show that its regression performance is better than that of the traditional ELM and the Bayesian ELM, at lower computational cost.
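The selection mechanism described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' exact algorithm: it assumes a sigmoid activation, ranks hidden nodes ("genes") by the magnitude of their output weights, keeps the top L/2, builds a reservoir from the L/2 trivial nodes plus L/2 freshly generated ones, and applies fitness-proportional selection over that reservoir. All function names and the ranking criterion are illustrative assumptions.

```python
import numpy as np

def elm_fit(X, y, W, b):
    """Fit ELM output weights by least squares: H = sigmoid(XW + b), beta = pinv(H) y."""
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return np.linalg.pinv(H) @ y

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

def evolutionary_selection_elm(X, y, L=20, seed=None):
    """Sketch of evolutionary selection ELM (assumed details, illustrative only)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Initial ELM with L random hidden nodes; each node acts as a "gene".
    W = rng.uniform(-1, 1, (d, L))
    b = rng.uniform(-1, 1, L)
    beta = elm_fit(X, y, W, b)
    # Rank nodes by |output weight|; the top L/2 survive into the updated ELM.
    order = np.argsort(-np.abs(beta))
    keep = order[:L // 2]
    # Candidate reservoir: the L/2 trivial nodes plus L/2 newly generated nodes.
    Wc = np.hstack([W[:, order[L // 2:]], rng.uniform(-1, 1, (d, L // 2))])
    bc = np.concatenate([b[order[L // 2:]], rng.uniform(-1, 1, L // 2)])
    # Second ranking: fitness-proportional selection picks L/2 reservoir nodes.
    fit = np.abs(elm_fit(X, y, Wc, bc)) + 1e-12
    pick = rng.choice(L, size=L // 2, replace=False, p=fit / fit.sum())
    # Recombine retained and selected nodes, then refit the output weights.
    Wf = np.hstack([W[:, keep], Wc[:, pick]])
    bf = np.concatenate([b[keep], bc[pick]])
    return Wf, bf, elm_fit(X, y, Wf, bf)
```

A usage example: on a simple 1-D regression target such as `y = sin(x)`, the recombined network of L nodes is refit in closed form by the pseudoinverse, so training remains as cheap as a standard ELM apart from the two extra ranking passes.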