Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation.
Independent component analysis: algorithms and applications. Neural Networks.
A Riemannian approach to graph embedding. Pattern Recognition.
IAM Graph Database Repository for Graph Based Pattern Recognition and Machine Learning. SSPR & SPR '08: Proceedings of the 2008 Joint IAPR International Workshop on Structural, Syntactic, and Statistical Pattern Recognition.
Graph Classification and Clustering Based on Vector Space Embedding.
Graph of words embedding for molecular structure-activity relationship analysis. CIARP '10: Proceedings of the 15th Iberoamerican Congress on Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications.
Multiple classifiers for graph of words embedding. MCS '11: Proceedings of the 10th International Conference on Multiple Classifier Systems.
Feature selection on node statistics based embedding of graphs. Pattern Recognition Letters.
Improving fuzzy multilevel graph embedding through feature selection technique. SSPR/SPR '12: Proceedings of the 2012 Joint IAPR International Conference on Structural, Syntactic, and Statistical Pattern Recognition.
Optimized dissimilarity space embedding for labeled graphs. Information Sciences.
The Graph of Words embedding maps every graph in a given dataset to a feature vector by counting unary and binary relations between the node attributes of the graph. Although it performs well in classification problems, it suffers from high dimensionality and sparsity. This article addresses these two issues by applying two well-known dimensionality reduction techniques, kernel principal component analysis (kPCA) and independent component analysis (ICA), to the embedded graphs. We compare their classification performance with that of the original vectors on three public graph databases.
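The unary/binary counting step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `gow_embedding`, the toy label vocabulary, and the graph representation (a list of node labels plus an edge list over node indices) are all assumptions made for the example. The resulting vectors could then be passed to any kPCA or ICA implementation.

```python
from itertools import combinations_with_replacement

def gow_embedding(node_labels, edges, vocab):
    """Hypothetical sketch of a graph-of-words style embedding:
    count node-label occurrences (unary relations) and labelled
    edge endpoints (binary relations) into one feature vector."""
    # unary part: one count per label in the vocabulary
    unary = [sum(1 for lab in node_labels if lab == v) for v in vocab]
    # binary part: one count per unordered pair of vocabulary labels
    pairs = list(combinations_with_replacement(vocab, 2))
    index = {p: i for i, p in enumerate(pairs)}
    binary = [0] * len(pairs)
    for u, v in edges:
        key = tuple(sorted((node_labels[u], node_labels[v])))
        binary[index[key]] += 1
    return unary + binary

# toy molecular graph: a C-C-O chain
vec = gow_embedding(["C", "C", "O"], [(0, 1), (1, 2)], vocab=["C", "O"])
# vec == [2, 1, 1, 1, 0]: two C nodes, one O node,
# one C-C edge, one C-O edge, no O-O edge
```

Note how the vector length grows quadratically with the vocabulary size, which is precisely the high-dimensionality and sparsity issue the article targets.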