Graphs are a powerful representation formalism widely employed in machine learning and data mining. In this paper, we present a graph-based classification method built on the construction of a special graph, referred to as the K-associated graph, which represents both the similarity relationships among data cases and the degree of overlap between classes. We describe the main properties of K-associated graphs as well as the classification algorithm. Experimental evaluation indicates that the proposed technique captures the topological structure of the training data and yields good classification results, particularly on noisy data. Compared to other well-known classification techniques, the proposed approach offers the following features: (1) a new measure, called purity, which not only characterizes the degree of overlap among classes in the input data set, but is also used to construct the optimal K-associated graph for classification; (2) nonlinear classification with automatic local adaptation to the input data: in contrast to the K-nearest neighbor classifier, which uses a fixed K, the proposed algorithm automatically considers different values of K to best fit the class overlap in different data subspaces, revealing both the local and global structure of the input data; (3) a nonparametric classification algorithm, implying high efficiency and no need for model selection in practical applications.
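One plausible reading of the construction described above can be sketched as follows: connect each training point by directed edges to the same-class members among its K nearest neighbors, take the weakly connected components of the resulting graph, and score each component's purity as its average (in + out) degree divided by 2K, the maximum degree a vertex can attain when classes do not mix. This is a hedged illustration of the idea, not the authors' implementation; all function names and the purity formula used here are assumptions.

```python
import numpy as np

def k_associated_graph(X, y, K):
    """Sketch (assumed construction): for each point, add directed
    edges to the same-class members of its K nearest neighbors,
    measured by Euclidean distance."""
    n = len(X)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)          # never pick a point as its own neighbor
    out_edges = [set() for _ in range(n)]
    for i in range(n):
        for j in np.argsort(d[i])[:K]:
            if y[j] == y[i]:             # keep only same-class neighbors
                out_edges[i].add(int(j))
    return out_edges

def component_purities(out_edges, K):
    """Find weakly connected components, then score each one as
    (average in + out degree) / 2K. The maximum possible degree is 2K
    (K out-edges plus up to K in-edges), reached only in regions
    with no class overlap, so purity near 1 means a clean region."""
    n = len(out_edges)
    und = [set() for _ in range(n)]      # symmetrized, for component search only
    for i, outs in enumerate(out_edges):
        for j in outs:
            und[i].add(j)
            und[j].add(i)
    comp = [-1] * n
    groups = []
    for s in range(n):                   # iterative DFS over unvisited vertices
        if comp[s] == -1:
            comp[s] = len(groups)
            stack, members = [s], [s]
            while stack:
                u = stack.pop()
                for v in und[u]:
                    if comp[v] == -1:
                        comp[v] = comp[s]
                        stack.append(v)
                        members.append(v)
            groups.append(members)
    indeg = [0] * n
    for outs in out_edges:
        for j in outs:
            indeg[j] += 1
    return [sum(len(out_edges[u]) + indeg[u] for u in m) / (2 * K * len(m))
            for m in groups]
```

On two well-separated clusters, every vertex links to, and is linked by, K same-class neighbors, so each component's purity is 1.0; where classes interleave, same-class neighbors become scarcer, degrees drop, and purity falls below 1. A classifier could then prefer, per region, the value of K whose components score highest, which matches the abstract's claim of local adaptation without a single fixed K.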