Binding and Normalization of Binary Sparse Distributed Representations by Context-Dependent Thinning

  • Authors:
  • Dmitri A. Rachkovskij; Ernst M. Kussul

  • Affiliations:
  • V.M. Glushkov Cybernetics Center, Pr. Acad. Glushkova 40, Kiev 22, 252022, Ukraine; Centro de Instrumentos, Universidad Nacional Autónoma de México, 04510 México D.F., Mexico

  • Venue:
  • Neural Computation
  • Year:
  • 2001


Abstract

Distributed representations have often been criticized as inappropriate for encoding data with complex structure. However, Plate's holographic reduced representations and Kanerva's binary spatter codes are recent schemes that allow on-the-fly encoding of nested compositional structures by real-valued or dense binary vectors of fixed dimensionality. In this article we consider the context-dependent thinning procedures developed for representing complex hierarchical items in the architecture of associative-projective neural networks. These procedures bind items represented by sparse binary codevectors (with a low probability of 1s). Such an encoding is biologically plausible and allows a high storage capacity in the distributed associative memory where the codevectors may be stored. In contrast to known binding procedures, context-dependent thinning preserves the same low density (or sparseness) of the bound codevector for a varying number of component codevectors. Moreover, a bound codevector is similar not only to other bound codevectors with similar components (as in other schemes) but also to the component codevectors themselves. This allows the similarity of structures to be estimated from the overlap of their codevectors, without retrieval of the component codevectors, and it also makes retrieval of the component codevectors easy. Examples of algorithmic and neural-network implementations of the thinning procedures are considered. We also present representation examples for various types of nested structured data (propositions using role-filler and predicate-argument schemes, trees, and directed acyclic graphs) using sparse codevectors of fixed dimension. Such representations may provide a fruitful alternative to the symbolic representations of traditional artificial intelligence, as well as to localist and microfeature-based connectionist representations.
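To make the idea concrete, the following is a minimal sketch (not the paper's exact procedure) of one additive variant of context-dependent thinning: components are superposed by elementwise OR, and the superposition is then thinned by intersecting it with fixed random permutations of itself until roughly the target number of 1s remains. The dimensionality `N`, the number of 1s `M`, and the pool of permutations `PERMS` are illustrative choices, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000   # codevector dimensionality (illustrative)
M = 100      # number of 1s per component codevector (density 1%)

def random_codevector():
    """Random sparse binary codevector with M ones."""
    v = np.zeros(N, dtype=np.uint8)
    v[rng.choice(N, size=M, replace=False)] = 1
    return v

# Fixed random permutations shared by all bindings, so that
# the same set of components always thins to the same result.
PERMS = [rng.permutation(N) for _ in range(32)]

def cdt(components, target_ones=M):
    """Bind components: superpose by OR, then thin the superposition
    by accumulating its intersections with permuted copies of itself
    until about target_ones bits survive."""
    z = np.zeros(N, dtype=np.uint8)
    for c in components:
        z |= c                       # superposition of all components
    thinned = np.zeros(N, dtype=np.uint8)
    for p in PERMS:
        thinned |= z & z[p]          # keep bits where z and a permuted z coincide
        if thinned.sum() >= target_ones:
            break
    return thinned
```

Because the result is a subset of the superposition, it overlaps every component codevector, which is the similarity-preservation property described above: structure similarity can be read off from codevector overlap, and components can be retrieved from the bound codevector.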