Balanced Learning for Ensembles with Small Neural Networks

  • Authors:
  • Yong Liu

  • Affiliations:
  • The University of Aizu, Aizu-Wakamatsu, Japan 965-8580

  • Venue:
  • ISICA '09 Proceedings of the 4th International Symposium on Advances in Computation and Intelligence
  • Year:
  • 2009


Abstract

By introducing an adaptive error function, balanced ensemble learning has been developed from negative correlation learning. In this paper, balanced ensemble learning is used to train a set of small neural networks, each with only one hidden node. The experimental results suggest that balanced ensemble learning is able to create a strong ensemble by combining a set of weak learners. Unlike bagging and boosting, where learners are trained on data randomly re-sampled from the original set of patterns, in balanced ensemble learning the learners can be trained on all available data. Interestingly, the learners produced by balanced ensemble learning can be only slightly better than random guessing even though they have been trained on the whole data set. Another difference among these ensemble learning methods is that learners are trained simultaneously in balanced ensemble learning, whereas they are trained independently in bagging and sequentially in boosting.
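The training setup described above can be sketched with negative correlation learning (NCL), the method from which balanced ensemble learning is derived; the paper's adaptive error function itself is not given in the abstract, so this is a simplified NCL-style sketch, not the author's exact algorithm. A set of networks, each with a single tanh hidden node, is trained simultaneously on the full data set, with a penalty term (strength `lam`) that decorrelates individual outputs from the ensemble mean. All names, data, and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D binary classification data with labels in {-1, +1}.
X = rng.normal(size=(200, 2))
y = np.sign(X[:, 0] + X[:, 1])
y[y == 0] = 1.0

M = 8      # ensemble size
lam = 0.5  # diversity penalty strength (illustrative value)
lr = 0.05  # learning rate

# One hidden node per network: w (input weights), b (bias), v (output weight).
w = rng.normal(scale=0.5, size=(M, 2))
b = np.zeros(M)
v = rng.normal(scale=0.5, size=M)

for epoch in range(500):
    H = np.tanh(X @ w.T + b)              # (200, M) hidden activations
    F = H * v                             # (200, M) individual outputs
    Fbar = F.mean(axis=1, keepdims=True)  # ensemble output
    # NCL-style gradient w.r.t. each F_i: (F_i - y) - lam * (F_i - Fbar).
    # All learners see the whole data set and are updated simultaneously.
    dF = (F - y[:, None]) - lam * (F - Fbar)
    # Backpropagate through the single hidden node of each network.
    dv = (dF * H).mean(axis=0)
    dZ = dF * v * (1 - H**2)
    dw = dZ.T @ X / len(X)
    db = dZ.mean(axis=0)
    v -= lr * dv
    w -= lr * dw
    b -= lr * db

ens_acc = np.mean(np.sign(F.mean(axis=1)) == y)
ind_acc = np.mean(np.sign(F) == y[:, None], axis=0).mean()
print(f"ensemble accuracy: {ens_acc:.2f}")
print(f"mean individual accuracy: {ind_acc:.2f}")
```

Because every learner is updated against the same shared ensemble output in each step, the networks are trained simultaneously rather than independently (bagging) or sequentially (boosting), mirroring the distinction drawn in the abstract.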