Vector symbolic architectures (VSAs) are high-dimensional vector representations of objects (e.g., words, image parts), relations (e.g., sentence structures), and sequences, for use with machine learning algorithms. They consist of a vector addition operator for representing a collection of unordered objects, a binding operator for associating groups of objects, and a methodology for encoding complex structures. We first develop constraints that machine learning imposes on VSAs; for example, similar structures must be represented by similar vectors. The constraints suggest that current VSAs should represent phrases (e.g., "The smart Brazilian girl") by binding sums of terms, in addition to simply binding the terms directly. We show that matrix multiplication can be used as the binding operator for a VSA, and that matrix elements can be chosen at random. A consequence for living systems is that binding is mathematically possible without the need to specify, in advance, precise neuron-to-neuron connection properties for large numbers of synapses. We describe a VSA that incorporates these ideas, Matrix Binding of Additive Terms (MBAT), and show that it satisfies all of the constraints. With respect to machine learning, appropriate VSA representations permit us, for some types of problems, to prove learnability rather than relying on simulations. We also propose dividing machine and neural learning and representation into three stages, with differing roles for learning in each stage. For neural modeling, we give representational reasons for nervous systems to have many recurrent connections, as well as for the importance of phrases in language processing. Sizing simulations and analyses suggest that VSAs in general, and MBAT in particular, are ready for real-world applications.
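
The core of this scheme, binding a sum of term vectors with a random matrix, is easy to sketch in code. The following is a minimal illustration, not the paper's implementation; the dimensionality, the tiny vocabulary, and the single "role" matrix are assumptions made only for the example:

    import numpy as np

    rng = np.random.default_rng(0)
    D = 1000  # dimensionality; an illustrative choice, not from the paper

    def rand_vec():
        # Random term vector, scaled to roughly unit norm.
        return rng.standard_normal(D) / np.sqrt(D)

    def rand_matrix():
        # Binding matrix with elements chosen at random, as the abstract
        # suggests; the 1/sqrt(D) scaling approximately preserves norms.
        return rng.standard_normal((D, D)) / np.sqrt(D)

    vocab = {w: rand_vec() for w in ["the", "smart", "Brazilian", "girl", "boy"]}

    def encode_phrase(words, role_matrix):
        # MBAT-style encoding: bind the *sum* of the term vectors with one
        # matrix multiplication, rather than binding each term separately.
        return role_matrix @ sum(vocab[w] for w in words)

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    M_role = rand_matrix()  # hypothetical matrix for one structural role

    girl = encode_phrase(["the", "smart", "Brazilian", "girl"], M_role)
    boy = encode_phrase(["the", "smart", "Brazilian", "boy"], M_role)

    print(cosine(girl, boy))                   # high: similar phrases, similar vectors
    print(cosine(girl, rand_matrix() @ girl))  # near zero: unrelated binding

Because the two phrases share three of their four terms, their bound vectors remain close, which is exactly the "similar structures must be represented by similar vectors" constraint; binding with an independent random matrix destroys that similarity.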