Representing a graph with a feature vector is a common way of making statistical machine learning algorithms applicable to the domain of graphs. Such a transition from graphs to vectors is known as graph embedding. A key issue in graph embedding is to select a proper set of features in order to make the vectorial representation of graphs as strong and discriminative as possible. In this article, we propose features that are constructed out of frequencies of node label representatives. We first build a large set of features and then select the most discriminative ones according to different ranking criteria and feature transformation algorithms. On different classification tasks, we experimentally show that only a small, significant subset of these features is needed to achieve the same classification rates as competing state-of-the-art methods.
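The pipeline described above — embed each graph as a vector of node-label frequencies, then rank features by how well they separate the classes — can be sketched as follows. This is a minimal illustration, not the authors' method: it reduces each graph to the multiset of its node labels (ignoring edges), uses the raw labels themselves as representatives, and ranks features by the absolute difference of per-class mean frequencies for a two-class problem. All function names (`embed`, `rank_features`) and the toy data are hypothetical.

```python
from collections import Counter

def embed(graphs, vocabulary):
    """Embed each graph as a vector of node-label frequencies.

    graphs: list of graphs, each simplified to a list of node labels
            (edge structure is ignored in this sketch).
    vocabulary: ordered list of label representatives.
    """
    vectors = []
    for node_labels in graphs:
        counts = Counter(node_labels)
        vectors.append([counts.get(label, 0) for label in vocabulary])
    return vectors

def rank_features(vectors, classes):
    """Rank features by a simple two-class separation score:
    the absolute difference of per-class mean frequencies."""
    dim = len(vectors[0])
    scores = []
    for j in range(dim):
        mean = {}
        for cls in set(classes):
            vals = [v[j] for v, y in zip(vectors, classes) if y == cls]
            mean[cls] = sum(vals) / len(vals)
        a, b = sorted(mean)
        scores.append(abs(mean[a] - mean[b]))
    # feature indices, most discriminative first
    return sorted(range(dim), key=lambda j: -scores[j])

# Toy example: four "molecular" graphs given by their node labels.
graphs = [["C", "C", "O"], ["C", "N", "N"], ["C", "O", "O"], ["N", "N", "C"]]
classes = [0, 1, 0, 1]
vocab = ["C", "N", "O"]

X = embed(graphs, vocab)           # X[0] == [2, 0, 1]
order = rank_features(X, classes)  # the "N" frequency separates best here
```

A selection step would then keep only a prefix of `order`, mirroring the idea that a small subset of the constructed features suffices for classification.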