In this paper, we present preliminary results indicating that Complex Network properties may improve the performance of Active Learning algorithms. Centrality measures derived from networks built over the data allow ranking the instances so as to identify the best ones to present to a human expert for manual classification. We discuss how to rank the instances based on the network vertex properties of closeness and betweenness. These measures, used in isolation or in combination, identify regions of the data space that characterize prototypical or critical examples with respect to the classification task. Results obtained on different data sets indicate that, compared to random selection of training instances, the approach reduces the error rate and its variance, as well as the number of instances required to reach representatives of all classes.
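To make the idea concrete, the following is a minimal sketch (not the paper's actual implementation) of centrality-based instance ranking: a k-nearest-neighbour graph is built over the data, closeness centrality is computed for each vertex via breadth-first search, and instances are ranked by centrality so the most central ones can be offered to the annotator first. Betweenness is omitted for brevity, and all function names and parameters here are illustrative assumptions.

```python
from collections import deque

def knn_graph(points, k=2):
    """Build an undirected k-nearest-neighbour graph over the points.
    Returns the adjacency as a dict: vertex index -> set of neighbours."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    adj = {i: set() for i in range(len(points))}
    for i, p in enumerate(points):
        nearest = sorted((j for j in range(len(points)) if j != i),
                         key=lambda j: dist2(p, points[j]))[:k]
        for j in nearest:
            adj[i].add(j)
            adj[j].add(i)  # symmetrize the neighbour relation
    return adj

def closeness(adj, v):
    """Closeness centrality of v: (reachable vertices) / (sum of BFS
    distances from v); 0.0 for an isolated vertex."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    total = sum(dist.values())
    return (len(dist) - 1) / total if total else 0.0

def rank_by_closeness(points, k=2):
    """Rank instance indices from most to least central."""
    adj = knn_graph(points, k)
    return sorted(range(len(points)), key=lambda v: -closeness(adj, v))

# Five collinear points: the middle one is the most central
# and therefore ranks first as a candidate for manual labelling.
points = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)]
print(rank_by_closeness(points))
```

In an active learning loop, the top-ranked instances would be sent to the human expert for labelling; the intuition is that high-closeness vertices sit in dense, representative regions of the data space, matching the prototypical examples described above.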