Using symbolic learning to improve knowledge-based neural networks

  • Authors:
  • Geoffrey G. Towell; Jude W. Shavlik

  • Affiliations:
  • Siemens Corporate Research, Princeton, NJ and University of Wisconsin, Madison, Wisconsin; University of Wisconsin, Madison, Wisconsin

  • Venue:
  • AAAI'92: Proceedings of the Tenth National Conference on Artificial Intelligence
  • Year:
  • 1992

Abstract

The previously described KBANN system integrates existing knowledge into neural networks by defining the network topology and setting initial link weights. Standard neural learning techniques can then be used to train such networks, thereby refining the information upon which the network is based. However, standard neural learning techniques are reputed to have difficulty training networks with multiple layers of hidden units, and KBANN commonly creates such networks. In addition, standard neural learning techniques ignore some of the information contained in the networks created by KBANN. This paper describes a symbolic inductive learning algorithm for training such networks that uses this previously ignored information and helps address the problems of training "deep" networks. Empirical evidence shows that this method improves not only learning speed but also the ability of networks to generalize correctly to testing examples.
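
To make the abstract's description of KBANN concrete, the sketch below illustrates the commonly described rule-to-network mapping for a single conjunctive rule: positive antecedents receive a large positive initial weight, negated antecedents the corresponding negative weight, and the bias is chosen so the sigmoid unit activates only when the rule body is satisfied. This is a minimal, hypothetical illustration, not the paper's implementation; the constant OMEGA, the helper names, and the example rule are assumptions made for exposition.

```python
import numpy as np

# Hypothetical sketch of a KBANN-style rule-to-unit translation: each
# conjunctive rule becomes a sigmoid unit whose initial weights and bias
# encode the rule, so the untrained network already reproduces it.

OMEGA = 4.0  # assumed magnitude for "large" initial link weights


def rule_to_unit(antecedents):
    """antecedents: list of (name, is_negated) pairs for one conjunctive rule.
    Returns (weights, bias) for a sigmoid unit implementing the rule."""
    weights = {name: (-OMEGA if neg else OMEGA) for name, neg in antecedents}
    n_pos = sum(1 for _, neg in antecedents if not neg)
    # Bias chosen so the unit activates only when every positive antecedent
    # is near 1 and every negated antecedent is near 0.
    bias = -(n_pos - 0.5) * OMEGA
    return weights, bias


def unit_output(weights, bias, inputs):
    """Sigmoid activation of one unit given a dict of input activations."""
    net = bias + sum(w * inputs.get(name, 0.0) for name, w in weights.items())
    return 1.0 / (1.0 + np.exp(-net))


# Example rule (hypothetical): safe :- locked, not broken.
w, b = rule_to_unit([("locked", False), ("broken", True)])
print(unit_output(w, b, {"locked": 1.0, "broken": 0.0}))  # high: rule fires
print(unit_output(w, b, {"locked": 1.0, "broken": 1.0}))  # low: rule blocked
```

After this translation, the weights and biases serve only as a starting point; standard gradient-based training (or, as this paper proposes, a symbolic inductive learner that also exploits the rule structure) then refines them against the training examples.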