Communications of the ACM
Numerical recipes: the art of scientific computing
High order correlation model for associative memory
AIP Conference Proceedings 151 on Neural Networks for Computing
Multilayer feedforward networks are universal approximators
Neural Networks
Recursive distributed representations
Artificial Intelligence - On connectionist symbol processing
Some new results on neural network approximation
Neural Networks
On the computational power of neural nets
Journal of Computer and System Sciences
The canonical form of nonlinear discrete-time models
Neural Computation
Exploiting generative models in discriminative classifiers
Proceedings of the 1998 conference on Advances in Neural Information Processing Systems 11
Learning with Recurrent Neural Networks
Neural Networks for Pattern Recognition
How to be a gray box: dynamic semi-physical modeling
Neural Networks
Application of Cascade Correlation Networks for Structures to Chemistry
Applied Intelligence
Text classification using string kernels
The Journal of Machine Learning Research
Extensions of marginalized graph kernels
ICML '04 Proceedings of the twenty-first international conference on Machine learning
Pattern Recognition Letters - Special issue: Artificial neural networks in pattern recognition
Universal Approximation Capability of Cascade Correlation for Structures
Neural Computation
Neural Networks
Recurrent networks for structured data - A unifying approach and its properties
Cognitive Systems Research
UC'06 Proceedings of the 5th international conference on Unconventional Computation
The present paper is a short survey of the development of numerical learning from structured data, an old problem first addressed at the end of the 1980s that has recently undergone exciting developments, both theoretical and applied. Traditionally, numerical machine learning deals with unstructured data in the form of vectors: neural networks, graphical models, and support vector machines handle vectors of features that are assumed to be relevant to the problem at hand (classification or regression). Frequently, however, data are structured, i.e., take the form of graphs; three examples are described here: prediction of the properties of molecules, image analysis, and natural language processing. The traditional approach consists in handcrafting a vector representation of the structured data (features describing the molecules, "bags of words" for language processing) and subsequently training a machine to perform the task from that representation. By contrast, we describe a family of approaches (RAAMs, LRAAMs, recursive or folding networks, graph machines) that are specifically designed to learn from structured data. We show that, despite their apparent diversity, two basic principles underlie these recent approaches: first, use structured machines to learn structured data; second, learn representations instead of handcrafting them. Although neither principle is really new, both have proved very successful for handling structured data, to the point of generating a novel branch of numerical machine learning.
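The core idea behind the recursive/folding-network family mentioned above — applying the same small network bottom-up over a tree so that a variable-size structure is mapped to a fixed-size learned representation — can be illustrated with a minimal sketch. Everything below (the 2-unit dimension, the weight values, the function names) is a hypothetical illustration, not the method of any particular cited paper.

```python
import math

# Size of every node representation (an arbitrary illustrative choice).
DIM = 2

def encode(node, W, b):
    """Bottom-up encoding: a node's representation is a squashed affine
    map of its own label concatenated with the summed representations
    of its children. The same (W, b) is reused at every node, which is
    what makes the machine 'structured' rather than a fixed-input map."""
    label, children = node
    child_sum = [0.0] * DIM
    for child in children:
        rep = encode(child, W, b)
        child_sum = [s + r for s, r in zip(child_sum, rep)]
    x = label + child_sum  # input: node features + summary of subtrees
    return [math.tanh(sum(w * xi for w, xi in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

# Toy tree: each node is (label_vector, [children]).
tree = ([1.0, 0.0], [([0.0, 1.0], []), ([1.0, 1.0], [])])

# Untrained illustrative weights; in practice W and b would be learned
# end to end (e.g. by backpropagation through structure) together with
# an output layer performing the classification or regression task.
W = [[0.5, -0.3, 0.2, 0.1],
     [-0.1, 0.4, 0.3, -0.2]]
b = [0.0, 0.0]

vec = encode(tree, W, b)  # fixed-size code for a variable-size tree
print(len(vec))           # always DIM, whatever the tree's shape
```

The contrast with the "traditional" approach is that no handcrafted feature vector is built for the tree: the representation `vec` is produced by the machine itself, and its parameters are trained jointly with the downstream task.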