Wrappers for feature subset selection
Artificial Intelligence - Special issue on relevance
Combining labeled and unlabeled data with co-training
COLT '98 Proceedings of the eleventh annual conference on Computational learning theory
Data mining: practical machine learning tools and techniques with Java implementations
Enhancing Supervised Learning with Unlabeled Data
ICML '00 Proceedings of the Seventeenth International Conference on Machine Learning
ITSPOKE: an intelligent tutoring spoken dialogue system
HLT-NAACL Demonstrations '04: Demonstration Papers at HLT-NAACL 2004
Predicting student emotions in computer-human tutoring dialogues
ACL '04 Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics
Semi-supervised fuzzy clustering: A kernel-based approach
Knowledge-Based Systems
A semi-supervised learning method for motility disease diagnostic
CIARP'07 Proceedings of the 12th Iberoamerican Congress on Pattern Recognition: Progress in pattern recognition, image analysis and applications
Co-training with relevant random subspaces
Neurocomputing
Darwin phones: the evolution of sensing and inference on mobile phones
Proceedings of the 8th international conference on Mobile systems, applications, and services
A semi-supervised approach for reject inference in credit scoring using SVMs
ICDM'10 Proceedings of the 10th industrial conference on Advances in data mining: applications and theoretical aspects
Semi-supervised genetic programming for classification
Proceedings of the 13th annual conference on Genetic and evolutionary computation
ISNN'11 Proceedings of the 8th international conference on Advances in neural networks - Volume Part II
Natural Language Processing applications often require large amounts of annotated training data, which are expensive to obtain. In this paper we investigate the applicability of Co-training to training classifiers that predict emotions in spoken dialogues. To do so, we first applied the wrapper approach with Forward Selection and Naïve Bayes to reduce the dimensionality of our feature set. Our results show that Co-training can be highly effective when a good set of features is chosen.
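The Co-training procedure the abstract refers to can be sketched as follows: two classifiers, each trained on its own view (subset) of the features, take turns labeling points from the unlabeled pool and adding their most confident predictions to the shared labeled set. This is a minimal illustrative sketch in the style of Blum and Mitchell's algorithm; the tiny GaussianNB class, the two-view split, and the synthetic data are assumptions for demonstration, not the paper's actual features or classifier.

```python
import math

class GaussianNB:
    """Tiny Gaussian Naive Bayes (illustrative stand-in for a real library)."""
    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.priors, self.stats = {}, {}
        for c in self.classes:
            rows = [x for x, lab in zip(X, y) if lab == c]
            self.priors[c] = len(rows) / len(X)
            feats = []
            for col in zip(*rows):
                mu = sum(col) / len(col)
                var = max(1e-6, sum((v - mu) ** 2 for v in col) / len(col))
                feats.append((mu, var))
            self.stats[c] = feats
        return self

    def proba(self, x):
        # log posterior per class, then softmax back to probabilities
        logp = {}
        for c in self.classes:
            s = math.log(self.priors[c])
            for v, (mu, var) in zip(x, self.stats[c]):
                s += -0.5 * math.log(2 * math.pi * var) - (v - mu) ** 2 / (2 * var)
            logp[c] = s
        m = max(logp.values())
        exps = {c: math.exp(s - m) for c, s in logp.items()}
        z = sum(exps.values())
        return {c: e / z for c, e in exps.items()}

    def predict(self, X):
        labels = []
        for x in X:
            p = self.proba(x)
            labels.append(max(p, key=p.get))
        return labels

def co_train(L_X, L_y, U_X, view1, view2, rounds=5, per_round=2):
    """Co-training: one classifier per feature view, each repeatedly
    labeling the unlabeled pool and moving its most confident picks
    into the shared labeled set."""
    L_X, L_y, U_X = list(L_X), list(L_y), list(U_X)
    proj = lambda X, view: [[x[i] for i in view] for x in X]
    for _ in range(rounds):
        for view in (view1, view2):
            if not U_X:
                break
            clf = GaussianNB().fit(proj(L_X, view), L_y)
            scored = []
            for i, x in enumerate(U_X):
                p = clf.proba([x[j] for j in view])
                scored.append((max(p.values()), max(p, key=p.get), i))
            scored.sort(reverse=True)          # most confident first
            picked = scored[:per_round]
            for _, lab, i in picked:
                L_X.append(U_X[i])             # pseudo-labeled example
                L_y.append(lab)
            for i in sorted((i for _, _, i in picked), reverse=True):
                del U_X[i]
    # final model trained on the augmented labeled set, all features
    return GaussianNB().fit(L_X, L_y)

# Synthetic two-class, four-feature data; view1 = features 0-1, view2 = 2-3
L_X = [[0.0, 0.0, 0.0, 0.0], [0.5, 0.2, 0.1, 0.3],
       [5.0, 5.0, 5.0, 5.0], [4.8, 5.2, 5.1, 4.9]]
L_y = [0, 0, 1, 1]
U_X = [[0.1, 0.2, 0.1, 0.0], [0.3, 0.1, 0.2, 0.2],
       [4.9, 5.1, 5.0, 5.2], [5.1, 4.9, 5.2, 5.0]]
clf = co_train(L_X, L_y, U_X, view1=[0, 1], view2=[2, 3])
```

The key design point mirrored here is that the two views should each be sufficient to classify on their own, so that one view's confident pseudo-labels genuinely inform the other; in practice the views come from a feature-selection step such as the wrapper approach mentioned above.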