Symbolic and Neural Learning Algorithms: An Experimental Comparison

  • Authors:
  • Jude W. Shavlik; Raymond J. Mooney; Geoffrey G. Towell

  • Affiliations:
  • Computer Sciences Department, University of Wisconsin, 1210 West Dayton Street, Madison, WI 53706. SHAVLIK@CS.WISC.EDU
  • Department of Computer Sciences, Taylor Hall 2.124, University of Texas, Austin, TX 78712. MOONEY@CS.UTEXAS.EDU
  • Computer Sciences Department, University of Wisconsin, Madison, WI 53706. TOWELL@CS.WISC.EDU

  • Venue:
  • Machine Learning
  • Year:
  • 1991


Abstract

Despite the fact that many symbolic and neural network (connectionist) learning algorithms address the same problem of learning from classified examples, very little is known regarding their comparative strengths and weaknesses. Experiments comparing the ID3 symbolic learning algorithm with the perceptron and backpropagation neural learning algorithms have been performed using five large, real-world data sets. Overall, backpropagation performs slightly better than the other two algorithms in terms of classification accuracy on new examples, but takes much longer to train. Experimental results suggest that backpropagation can work significantly better on data sets containing numerical data. Also analyzed empirically are the effects of (1) the amount of training data, (2) imperfect training examples, and (3) the encoding of the desired outputs. Backpropagation occasionally outperforms the other two systems when given relatively small amounts of training data. It is slightly more accurate than ID3 when examples are noisy or incompletely specified. Finally, backpropagation more effectively utilizes a “distributed” output encoding.
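The kind of comparison the abstract describes can be sketched with modern tooling. The snippet below is not the paper's original experimental setup; it is a minimal illustration using scikit-learn, with `DecisionTreeClassifier` (entropy criterion) as an ID3-style stand-in, `Perceptron`, and `MLPClassifier` as a backpropagation-trained network, evaluated on held-out examples of a single dataset (the paper used five).

```python
# Minimal sketch (assumed setup, not the paper's): compare an ID3-style
# decision tree, a perceptron, and a backpropagation network on one dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import Perceptron
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

models = {
    "decision tree (ID3-style)": DecisionTreeClassifier(
        criterion="entropy", random_state=0),
    "perceptron": Perceptron(max_iter=1000, random_state=0),
    "backpropagation (MLP)": MLPClassifier(
        hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
}

scores = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    # classification accuracy on new (held-out) examples
    scores[name] = model.score(X_test, y_test)
    print(f"{name}: {scores[name]:.3f}")
```

As in the paper's findings, a fair comparison would also vary the amount of training data and inject noise into the examples; here only test-set accuracy on one split is reported.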