Representing, learning and extracting temporal knowledge from neural networks: a case study

  • Authors:
  • Rafael V. Borges; Artur D'Avila Garcez; Luis C. Lamb

  • Affiliations:
  • Department of Computing, City University London; Department of Computing, City University London; Institute of Informatics, Federal University of Rio Grande do Sul

  • Venue:
  • ICANN'10: Proceedings of the 20th International Conference on Artificial Neural Networks, Part II
  • Year:
  • 2010

Abstract

The integration of knowledge representation, reasoning and learning into a robust and computationally effective model is a key challenge in Artificial Intelligence. Temporal models are fundamental for describing the behaviour of computing and information systems, yet acquiring a description of the desired behaviour of a system remains a complex task in several AI domains. In this paper, we evaluate a neural framework capable of adapting temporal models according to properties and of learning through the observation of examples. In this framework, a symbolically described model is translated into a recurrent neural network, and algorithms are proposed to integrate learning both from examples and from properties. Finally, the knowledge is represented symbolically once more, incorporating both the initial model and the learned specification, as shown by our case study. The case study illustrates how integrating methodologies and principles from distinct AI areas can be relevant to building robust intelligent systems.
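The abstract's central step is the translation of a symbolic temporal model into a recurrent neural network. The sketch below is a minimal, illustrative rendering of that idea in the general style of CILP-like encodings: each temporal rule becomes a hidden neuron whose threshold realises the rule body, and the "next" operator is realised by a recurrent link feeding the output back as an input at the following time step. The specific rules, weights and thresholds are assumptions chosen for clarity; they are not the encoding or algorithm defined in the paper.

```python
import numpy as np

# Hypothetical temporal program (illustrative only, not from the paper):
#   r1: next(x) <- a AND b      (x becomes true one step after a and b hold)
#   r2: next(x) <- x            (x persists once established)

def step(v):
    """Hard-threshold activation, used here for readability."""
    return (v > 0).astype(float)

# Input vector at each time step: [a, b, x_prev] (x_prev is the recurrent feedback).
# One hidden neuron per rule; thresholds separate "all body literals true" from the rest.
W_in_hidden  = np.array([[1.0, 1.0, 0.0],     # r1 body: a AND b
                         [0.0, 0.0, 1.0]])    # r2 body: x
b_hidden     = np.array([-1.5, -0.5])         # fire only when the whole body is satisfied

W_hidden_out = np.array([[1.0, 1.0]])         # head x is true if any rule for it fires
b_out        = np.array([-0.5])

def run(observations):
    """Unroll the recurrent network over a sequence of (a, b) observations.

    The output at step t is the computed value of next(x), i.e. x at time t+1,
    which is fed back through the recurrent link for the following step.
    """
    x_prev = 0.0
    trace = []
    for a, b in observations:
        h = step(W_in_hidden @ np.array([a, b, x_prev]) + b_hidden)
        x = step(W_hidden_out @ h + b_out)[0]
        trace.append(x)
        x_prev = x                            # recurrent feedback carries x to t+1
    return trace

# a and b both hold at the second step, so x becomes true afterwards and persists:
print(run([(1, 0), (1, 1), (0, 0)]))          # -> [0.0, 1.0, 1.0]
```

Under these assumptions, learning from examples would amount to adjusting the weights of such a network with a standard gradient-based method (after replacing the hard threshold with a differentiable activation), while the symbolic rules above indicate the kind of specification one would aim to extract back from the trained weights.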