Distributed representations have often been criticized as inappropriate for encoding data with complex structure. However, Plate's holographic reduced representations and Kanerva's binary spatter codes are recent schemes that allow on-the-fly encoding of nested compositional structures by real-valued or dense binary vectors of fixed dimensionality. In this article we consider procedures of context-dependent thinning, developed for the representation of complex hierarchical items in the architecture of associative-projective neural networks. These procedures bind items represented by sparse binary codevectors (with a low probability of 1s). Such an encoding is biologically plausible and allows a high storage capacity in the distributed associative memory where the codevectors may be stored. In contrast to known binding procedures, context-dependent thinning preserves the same low density (or sparseness) of the bound codevector for a varying number of component codevectors. Moreover, a bound codevector is similar not only to other bound codevectors with similar components (as in other schemes) but also to the component codevectors themselves. This allows the similarity of structures to be estimated from the overlap of their codevectors, without retrieval of the component codevectors; it also allows easy retrieval of the component codevectors. Examples of algorithmic and neural-network implementations of the thinning procedures are considered. We also present representation examples for various types of nested structured data (propositions using role-filler and predicate-arguments schemes, trees, and directed acyclic graphs) using sparse codevectors of fixed dimension. Such representations may provide a fruitful alternative to the symbolic representations of traditional artificial intelligence, as well as to localist and microfeature-based connectionist representations.
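The binding-with-density-preservation idea can be illustrated with a minimal sketch. The code below assumes an additive variant of context-dependent thinning: the component codevectors are superimposed by elementwise OR, and the superposition is then "thinned" by accumulating conjunctions of itself with fixed random permutations of itself until the result returns to roughly the density of a single component. The dimensionality `N`, the number of 1s `M`, and the permutation-based thinning loop are illustrative choices, not the article's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10000   # codevector dimensionality
M = 100     # number of 1s per component codevector (density M/N)

def random_codevector():
    """A sparse binary codevector with exactly M ones."""
    v = np.zeros(N, dtype=bool)
    v[rng.choice(N, size=M, replace=False)] = True
    return v

def thin(z, target_ones, seed=42):
    """Additive context-dependent thinning (illustrative sketch).

    Accumulate conjunctions of the superposition z with fixed random
    permutations of itself until the result reaches roughly the
    target number of 1s. Which bits of each component survive depends
    on the other components present in z -- hence "context-dependent".
    """
    perm_rng = np.random.default_rng(seed)  # fixed permutations
    out = np.zeros_like(z)
    while out.sum() < target_ones:
        p = perm_rng.permutation(N)
        out |= z & z[p]
    return out

a, b, c = (random_codevector() for _ in range(3))
z = a | b | c                  # superposition of the components
t = thin(z, target_ones=M)     # bound codevector, ~same density as one component

# The thinned vector is a subset of the superposition, so it still
# overlaps each component codevector -- similarity to the components
# is preserved, and density stays low for any number of components.
print(t.sum(), (t & a).sum(), (t & ~z).sum())
```

Because `t` contains only bits drawn from `z`, its overlap with each component is a roughly equal share of its `~M` ones, so structure similarity can be estimated directly from codevector overlap, as the abstract describes.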