Learning from structured data (e.g., graphs) is a topic that has recently received attention from the machine learning community, which has proposed connectionist models such as recursive neural nets (RNNs) and graph neural nets (GNNs). In spite of their sound theoretical properties, RNNs and GNNs suffer from drawbacks that may limit their application. This paper outlines an alternative connectionist framework for learning discriminant functions over structured data. The approach, albeit preliminary, is simple, is suited to maximum-a-posteriori classification of broad families of graphs, and overcomes some limitations of RNNs and GNNs. The idea is to describe a graph as an algebraic relation, i.e., as a subset of a Cartesian product of sets. The class-posterior probabilities given the relation are reduced to products of simpler probabilistic quantities, each estimated with a multilayer perceptron. Experimental comparisons on tasks previously tackled with RNNs and GNNs validate the approach.
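To make the idea concrete, the sketch below shows one plausible reading of the abstract: a graph is handled as a relation (a set of node-index pairs), a small multilayer perceptron assigns class probabilities to each pair, and the graph-level posterior is taken as the product of those per-pair probabilities. The MLP architecture, the naive product factorization, and all names in the code are illustrative assumptions, not the authors' exact formulation.

```python
# Hypothetical sketch: MAP classification of a graph described as a relation,
# with class posteriors factored into per-pair probabilities from an MLP.
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, W1, b1, W2, b2):
    """One-hidden-layer MLP with a softmax output over classes."""
    h = np.tanh(x @ W1 + b1)
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Toy parameters: node features of dimension d, hidden size h, n_classes classes.
d, h, n_classes = 4, 8, 3
W1 = rng.normal(scale=0.1, size=(2 * d, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.1, size=(h, n_classes)); b2 = np.zeros(n_classes)

def classify_graph(node_features, edges):
    """Maximum-a-posteriori class of a graph given as a relation (edge set).

    node_features: (n_nodes, d) array of node attribute vectors.
    edges: iterable of (u, v) index pairs, i.e. a subset of V x V.
    """
    log_post = np.zeros(n_classes)  # accumulate in log space to avoid underflow
    for u, v in edges:
        pair = np.concatenate([node_features[u], node_features[v]])
        p = mlp_forward(pair, W1, b1, W2, b2)   # class probabilities for this pair
        log_post += np.log(p + 1e-12)           # product of per-pair probabilities
    return int(np.argmax(log_post))

# Usage on a toy graph with 3 nodes and 3 edges.
X = rng.normal(size=(3, d))
relation = [(0, 1), (1, 2), (2, 0)]
print("Predicted class:", classify_graph(X, relation))
```

Working in log space is a standard numerical choice when multiplying many probabilities; whether the paper uses this or another normalization is not stated in the abstract.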