Feature Construction for Back-Propagation
PPSN I: Proceedings of the 1st Workshop on Parallel Problem Solving from Nature
The ease of learning concepts from examples in empirical machine learning depends on the attributes used to describe the training data. We show that decision-tree-based feature construction can improve the performance of back-propagation (BP), an artificial neural network algorithm, in terms of both the speed of convergence and the number of epochs the BP algorithm takes to converge. We use disjunctive concepts to illustrate feature construction, and we describe a measure of feature quality and concept difficulty. We show that reducing the difficulty of the concepts to be learned, by constructing better representations, considerably improves the performance of BP.
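To make the idea concrete, the following Python sketch shows one plausible form of decision-tree-based feature construction feeding a BP network. It is not the authors' implementation: the disjunctive target concept, the dataset and network sizes, the learning parameters, and the choice of leaf-membership indicators as the constructed features are all illustrative assumptions, and scikit-learn's DecisionTreeClassifier and MLPClassifier stand in for whatever tree inducer and BP code the paper used.

```python
# A minimal sketch of decision-tree-based feature construction for BP.
# NOT the paper's implementation: the concept, sizes, and the use of
# leaf-membership indicators as constructed features are assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical disjunctive concept over boolean attributes:
# y = (x0 AND x1) OR (x2 AND x3); attributes x4..x7 are irrelevant noise.
X = rng.integers(0, 2, size=(600, 8)).astype(float)
y = ((X[:, 0] * X[:, 1] + X[:, 2] * X[:, 3]) > 0).astype(int)

# Step 1: fit a shallow decision tree on the primitive attributes.
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)

# Step 2: construct new features from the tree. Each constructed feature
# indicates membership in one leaf, i.e. it encodes the conjunction of
# attribute tests on the path to that leaf.
leaf_ids = tree.apply(X)
uniq = np.unique(leaf_ids)
leaf_features = (leaf_ids[:, None] == uniq[None, :]).astype(float)
X_aug = np.hstack([X, leaf_features])  # primitives + constructed features

# Step 3: train the same BP network on both representations and compare
# the number of epochs needed to converge.
def epochs_to_converge(features):
    net = MLPClassifier(hidden_layer_sizes=(6,), solver="sgd",
                        learning_rate_init=0.3, momentum=0.9,
                        max_iter=5000, random_state=0)
    net.fit(features, y)
    return net.n_iter_

print("epochs, primitive attributes:", epochs_to_converge(X))
print("epochs, constructed features:", epochs_to_converge(X_aug))
```

Because each leaf indicator already expresses a conjunction of primitive tests, the disjunctive target becomes close to linearly separable in the augmented space; this is the sense in which constructed features reduce concept difficulty and let BP converge in fewer epochs.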