This paper presents a new network-based classification technique using limiting probabilities from random walk theory. Instead of relying on traditional heuristics that classify data by physical features such as similarity or density distribution, it uses a concept called ease of access. By means of an underlying network, in which nodes represent states of the random walk process, each unlabeled instance is assigned the label of the most easily reached class. The limiting probabilities serve as a measure of ease of access, taking into account the biases that an unlabeled instance introduces through a specific weight composition of the adjacency matrix. In this way, the technique allows data classification from a different viewpoint. Simulation results suggest that the proposed scheme is competitive with current and well-known classification algorithms.
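The ease-of-access idea can be sketched as follows. This is an illustrative simplification, not the paper's exact algorithm: the function names, the toy graph, and the plain use of edge weights are assumptions here, whereas the paper additionally biases the adjacency-matrix weights according to the unlabeled instance. The sketch computes the limiting (stationary) probabilities of a random walk on a weighted network by power iteration and labels the unlabeled node with the class whose labeled nodes accumulate the most limiting probability.

```python
import numpy as np

def stationary_distribution(W, tol=1e-10, max_iter=10000):
    """Limiting probabilities of a random walk on a weighted graph.

    W is a nonnegative weight (adjacency) matrix; every row must have
    at least one positive entry. The walk moves from node i to node j
    with probability W[i, j] / sum_k W[i, k].
    """
    P = W / W.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
    pi = np.full(len(W), 1.0 / len(W))     # start from the uniform distribution
    for _ in range(max_iter):
        nxt = pi @ P                       # one step of the power iteration
        if np.abs(nxt - pi).sum() < tol:
            break
        pi = nxt
    return pi

def classify_by_ease_of_access(W, labels):
    """Label the unlabeled instance with the most easily reached class.

    `labels` maps labeled node indices to class labels; the class whose
    labeled nodes accumulate the most limiting probability wins.
    """
    pi = stationary_distribution(W)
    scores = {}
    for node, c in labels.items():
        scores[c] = scores.get(c, 0.0) + pi[node]
    return max(scores, key=scores.get)

# Toy network: nodes 0-1 belong to class 'A', nodes 2-3 to class 'B',
# and node 4 is the unlabeled instance, tied more strongly to class 'A'.
W = np.zeros((5, 5))
for i, j, w in [(0, 1, 1), (2, 3, 1), (4, 0, 3), (4, 1, 3), (4, 2, 1), (4, 3, 1)]:
    W[i, j] = W[j, i] = w
print(classify_by_ease_of_access(W, {0: 'A', 1: 'A', 2: 'B', 3: 'B'}))  # → A
```

For an undirected weighted graph like the toy example, the limiting probability of each node is proportional to its weighted degree, so node 4's strong ties to the class-'A' nodes pull more stationary mass onto that class.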