A parallel learning approach for neural network ensemble

  • Authors:
  • Zheng-Qun Wang;Shi-Fu Chen;Zhao-Qian Chen;Jun-Yuan Xie

  • Affiliations:
State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, P.R. China (all authors)

  • Venue:
  • AI'04 Proceedings of the 17th Australian joint conference on Advances in Artificial Intelligence
  • Year:
  • 2004


Abstract

A parallel training algorithm for component neural networks, PLA, is proposed. It encourages each component network to learn both from the expected goal and from the other component networks, so that all component networks are trained simultaneously and interactively. For the stage of combining the component networks, we provide a parallel weight-optimization approach, GASEN-e, which extends the GASEN approach proposed by Zhou et al. by assigning a weight to every component network and a bias to their ensemble. Experimental results show that a neural network ensemble system is efficiently constructed by PLA and GASEN-e.
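The combination stage described above can be illustrated with a minimal sketch. The abstract does not give the exact formula, so the following assumes the common GASEN-style scheme: each component network's output is weighted (weights normalized to sum to one, as produced by GASEN's genetic-algorithm search), and GASEN-e's extension is modeled as a single scalar bias added to the weighted sum. The function name, array shapes, and example values are all hypothetical.

```python
import numpy as np

def ensemble_predict(predictions, weights, bias):
    """Combine component network outputs as a weighted sum plus a bias.

    predictions: array of shape (n_networks, n_samples), one row per network
    weights: array of shape (n_networks,), normalized here to sum to 1
    bias: scalar ensemble bias term (modeling the GASEN-e extension)
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize, as in GASEN
    return weights @ np.asarray(predictions, dtype=float) + bias

# Hypothetical outputs of three component networks on four inputs
preds = np.array([[0.2, 0.8, 0.5, 0.1],
                  [0.3, 0.7, 0.6, 0.2],
                  [0.1, 0.9, 0.4, 0.3]])
combined = ensemble_predict(preds, weights=[1.0, 2.0, 1.0], bias=0.05)
```

In this sketch the weights and bias are fixed by hand; in the paper's setting they would instead be found by the parallel weight-optimization procedure.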