The utility of feature construction for back-propagation

  • Authors:
  • Harish Ragavan; Selwyn Piramuthu

  • Affiliations:
  • Beckman Institute, University of Illinois at Urbana-Champaign, Urbana, IL; Beckman Institute, University of Illinois at Urbana-Champaign, Urbana, IL

  • Venue:
  • IJCAI'91 Proceedings of the 12th international joint conference on Artificial intelligence - Volume 2
  • Year:
  • 1991

Abstract

The ease of learning concepts from examples in empirical machine learning depends on the attributes used to describe the training data. We show that decision-tree based feature construction can be used to improve the performance of back-propagation (BP), an artificial neural network algorithm, in terms of both convergence speed and the number of epochs the BP algorithm takes to converge. We use disjunctive concepts to illustrate feature construction, and describe a measure of feature quality and concept difficulty. We show that reducing the difficulty of the concepts to be learned by constructing better representations improves the performance of BP considerably.
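The core idea can be illustrated with a small sketch. Below is a hypothetical example (not the paper's actual dataset or code): a disjunctive target concept over four boolean attributes is hard for BP in its raw representation, but constructing conjunctive features of the kind a decision tree reads off its root-to-leaf paths collapses the concept into a simple OR, which is linearly separable and therefore easy for BP to learn.

```python
from itertools import product

# Assumed illustrative target: a disjunctive concept over four boolean
# attributes, (x1 AND x2) OR (x3 AND x4). This stands in for the kinds of
# disjunctive concepts the paper uses; it is not taken from the paper.
def concept(x1, x2, x3, x4):
    return (x1 and x2) or (x3 and x4)

# Decision-tree-style feature construction: each constructed feature is a
# conjunction of attribute tests along one root-to-leaf path. The feature
# names f1, f2 are hypothetical.
def construct_features(x1, x2, x3, x4):
    f1 = x1 and x2   # path testing x1, then x2
    f2 = x3 and x4   # path testing x3, then x4
    return f1, f2

# In the constructed representation the target reduces to f1 OR f2,
# a linearly separable function a single BP unit can learn quickly.
for bits in product([0, 1], repeat=4):
    f1, f2 = construct_features(*bits)
    assert concept(*bits) == (f1 or f2)

print("constructed features reduce the concept to a linearly separable OR")
```

The design point mirrors the abstract: the learning problem itself is unchanged, but the improved representation lowers concept difficulty, which is what speeds up BP's convergence.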