Recently, a successful extension of Principal Component Analysis (PCA) to structured input, such as sequences, trees, and graphs, has been proposed. It embeds discrete structures into vector spaces, where all the classical pattern recognition and machine learning methods can be applied. The approach is based on an eigenanalysis of extended vectorial representations of the input structures and their substructures. One problem with the approach is that this eigenanalysis can be computationally quite demanding on large datasets of structured objects. In this paper we propose a general approach for reducing the computational burden. Experimental results show a significant speed-up of the computation.
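The eigenanalysis step at the core of the approach can be illustrated with a minimal sketch of classical PCA on plain vectors. This is a generic illustration of eigenanalysis-based PCA, not the paper's structured extension or its proposed speed-up; the function name `pca_eig` and the random data are assumptions for the example:

```python
import numpy as np

def pca_eig(X, k):
    """Return the top-k principal components of the rows of X
    via eigenanalysis of the sample covariance matrix -- the
    step whose cost grows with the size of the dataset."""
    Xc = X - X.mean(axis=0)              # center the data
    C = (Xc.T @ Xc) / (X.shape[0] - 1)   # sample covariance matrix
    vals, vecs = np.linalg.eigh(C)       # eigenanalysis (symmetric matrix)
    order = np.argsort(vals)[::-1]       # sort eigenvalues, largest first
    return vecs[:, order[:k]]            # leading k eigenvectors

# Example: embed 2-D points into a 1-D space along the first component
X = np.random.default_rng(0).normal(size=(100, 2))
W = pca_eig(X, 1)
Z = X @ W  # projected (embedded) representation
```

For structured data, `X` would instead hold the extended vectorial representations of the structures and their substructures, so both the number of rows and the dimensionality can grow quickly, which is exactly where the eigenanalysis becomes the bottleneck the paper addresses.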