Layered memory using backward-chaining

  • Author: Joshua Gay
  • Affiliation: Department of Mathematics and Computer Science, Suffolk University, Boston, MA
  • Venue: CCSC '01: Proceedings of the Sixth Annual CCSC Northeastern Conference (The Journal of Computing in Small Colleges)
  • Year: 2001

Abstract

This is a progress report on my research project to design a model for layered memory in an intelligent agent. I am using the cognitive model of human memory [4] as the design reference. It defines three layers: sensory information storage (SIS); short-term memory (STM); and long-term memory (LTM). SIS processes sensory inputs and motor skills. It is fast but 'brittle', in that it is difficult to reprogram. It can be implemented as neural networks. STM is the seat of rationality and goal-setting. It is relatively fast, but its storage is small (perhaps seven semantic 'chunks'). It can be implemented as a knowledge base (KB). LTM stores vivid sensory impressions, heuristic rules, and syntactic knowledge. Its storage is large but access is slow; reconstruction of STM state from LTM may be unreliable. It can be implemented as a database. The focus of my research at this point is the STM layer. A key feature of my STM model is backward-chaining inference. I present here a prototype of my layered memory model and the important points I have learned in implementing the STM layer.
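
The abstract names backward-chaining inference over a knowledge base as the key feature of the STM layer. The sketch below is a rough illustration of that idea, not the paper's prototype: the class name ShortTermMemory, the rule encoding, and the seven-chunk eviction policy are assumptions made for illustration.

    # A minimal sketch of an STM layer that answers goals by backward-chaining
    # over a small rule base. Class, rule, and fact names are illustrative
    # placeholders, not identifiers from the paper's prototype.

    from typing import List, Optional, Set, Tuple

    # Each rule pairs a goal (head) with the subgoals that together establish it.
    Rule = Tuple[str, List[str]]


    class ShortTermMemory:
        """Bounded working memory that proves goals by backward-chaining."""

        def __init__(self, rules: List[Rule], capacity: int = 7):
            self.rules = rules
            self.capacity = capacity      # the "seven chunks" limit from the abstract
            self.chunks: List[str] = []   # facts currently held in STM

        def remember(self, fact: str) -> None:
            """Store a fact, evicting the oldest chunk if capacity is exceeded."""
            if fact in self.chunks:
                return
            self.chunks.append(fact)
            if len(self.chunks) > self.capacity:
                self.chunks.pop(0)        # forget the oldest chunk

        def prove(self, goal: str, seen: Optional[Set[str]] = None) -> bool:
            """A goal holds if it is already a chunk, or if some rule concludes
            it and every subgoal of that rule can itself be proved."""
            if seen is None:
                seen = set()
            if goal in self.chunks:
                return True
            if goal in seen:              # guard against cyclic rule sets
                return False
            seen.add(goal)
            for head, body in self.rules:
                if head == goal and all(self.prove(sub, seen) for sub in body):
                    self.remember(goal)   # cache the derived fact as a new chunk
                    return True
            return False


    if __name__ == "__main__":
        # Toy rule base: can the agent reach an object it currently sees?
        rules = [
            ("can_reach(apple)", ["sees(apple)", "path_clear(apple)"]),
            ("path_clear(apple)", ["no_obstacle(apple)"]),
        ]
        stm = ShortTermMemory(rules)
        stm.remember("sees(apple)")       # facts handed up from the SIS layer
        stm.remember("no_obstacle(apple)")
        print(stm.prove("can_reach(apple)"))  # True

Caching derived facts through remember keeps the working set within the small chunk bound the abstract mentions; in a fuller three-layer model one might hand evicted chunks down to an LTM store rather than discard them.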