Approximating the Semantics of Logic Programs by Recurrent Neural Networks

  • Authors:
  • Steffen Hölldobler; Yvonne Kalinke; Hans-Peter Störr

  • Affiliations:
  • Steffen Hölldobler: Artificial Intelligence Institute, Computer Science Department, Dresden University of Technology, D-01062 Dresden, Germany. sh@inf.tu-dresden.de
  • Yvonne Kalinke: Neurocomputing Research Center, Queensland University of Technology, G.P.O. Box 2434, Brisbane QLD 4001, Australia. yvonne@fit.qut.edu.au
  • Hans-Peter Störr: Artificial Intelligence Institute, Computer Science Department, Dresden University of Technology, D-01062 Dresden, Germany. haps@inf.tu-dresden.de

  • Venue:
  • Applied Intelligence
  • Year:
  • 1999

Abstract

In [1] we have shown how to construct a 3-layered recurrent neural network that computes the fixed point of the meaning function T_P of a given propositional logic program P, which corresponds to the computation of the semantics of P. In this article we consider the first-order case. We define a notion of approximation for interpretations and prove that there exists a 3-layered feed-forward neural network that approximates the calculation of T_P for a given first-order acyclic logic program P with an injective level mapping arbitrarily well. Extending the feed-forward network by recurrent connections, we obtain a recurrent neural network whose iteration approximates the fixed point of T_P. This result is proven by taking advantage of the fact that for acyclic logic programs the function T_P is a contraction mapping on a complete metric space defined by the interpretations of the program. Mapping this space to the metric space R with Euclidean distance, a real-valued function f_P can be defined which corresponds to T_P and is continuous as well as a contraction. Consequently, it can be approximated by an appropriately chosen class of feed-forward neural networks.
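
To illustrate the operator the networks are built to compute, the following minimal Python sketch (not taken from the paper) implements the immediate consequence operator T_P for a finite propositional program and iterates it to a fixed point. The clause encoding and the example program are hypothetical; the paper's point is that this iteration can be carried out by a 3-layered recurrent network in the propositional case, and approximated arbitrarily well in the first-order acyclic case.

```python
# Minimal sketch of the immediate consequence operator T_P and its iteration
# for a finite propositional logic program (illustration only, not the
# authors' network construction).

from typing import FrozenSet, List, Tuple

# A clause is (head, positive body atoms, negative body atoms),
# i.e. head <- pos_1, ..., pos_m, not neg_1, ..., not neg_n.
Clause = Tuple[str, FrozenSet[str], FrozenSet[str]]


def t_p(program: List[Clause], interpretation: FrozenSet[str]) -> FrozenSet[str]:
    """One application of T_P: an atom is true in the result iff some clause
    with that head has all positive body atoms true and all negative body
    atoms false in the given interpretation."""
    return frozenset(
        head
        for head, pos, neg in program
        if pos <= interpretation and neg.isdisjoint(interpretation)
    )


def iterate_t_p(program: List[Clause],
                start: FrozenSet[str] = frozenset(),
                max_steps: int = 100) -> FrozenSet[str]:
    """Iterate T_P from `start` until a fixed point is reached (or the step
    budget is exhausted).  For the acyclic example below the iteration
    stabilises at the unique fixed point after finitely many steps."""
    current = start
    for _ in range(max_steps):
        nxt = t_p(program, current)
        if nxt == current:
            return current
        current = nxt
    return current


if __name__ == "__main__":
    # Hypothetical acyclic example program:  p <-.   q <- p.   r <- q, not s.
    program: List[Clause] = [
        ("p", frozenset(), frozenset()),
        ("q", frozenset({"p"}), frozenset()),
        ("r", frozenset({"q"}), frozenset({"s"})),
    ]
    print(sorted(iterate_t_p(program)))  # ['p', 'q', 'r']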