Intelligent desktop environments allow the desktop user to define a set of projects or activities that characterize the user's desktop work. These environments then attempt to identify the user's current activity in order to provide various kinds of assistance. These systems take a hybrid approach: they allow the user to declare the current activity, but they also employ learned classifiers to predict it in those cases where the user forgets to make a declaration. The classifiers must therefore be trained on the very noisy data obtained from the user's activity declarations. Rather than asking the user to review and relabel the data manually, we employ an active EM algorithm that combines the EM algorithm with active learning. EM can be viewed as retraining on its own predictions; to make it more robust, we retrain only on those predictions made with high confidence. For active learning, we pose a small number of queries to the user, selected from the most uncertain instances. Experimental results on real users show that this active EM algorithm significantly improves prediction precision, and that it outperforms either EM or active learning alone.
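The combination described above — self-training on high-confidence predictions plus oracle queries on the most uncertain instances — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the nearest-centroid classifier, the softmax confidence estimate, and all function names (`centroids`, `predict_proba`, `active_em`) are assumptions chosen to keep the example self-contained.

```python
import math

def centroids(X, y):
    # Mean feature vector per class (a stand-in for the paper's classifier).
    sums, counts = {}, {}
    for x, c in zip(X, y):
        counts[c] = counts.get(c, 0) + 1
        s = sums.setdefault(c, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
    return {c: [v / counts[c] for v in s] for c, s in sums.items()}

def predict_proba(cents, x):
    # Softmax over negative squared distances to each class centroid,
    # used here as an illustrative confidence score.
    logits = {c: -sum((a - b) ** 2 for a, b in zip(m, x))
              for c, m in cents.items()}
    mx = max(logits.values())
    exps = {c: math.exp(v - mx) for c, v in logits.items()}
    z = sum(exps.values())
    return {c: v / z for c, v in exps.items()}

def active_em(X_lab, y_lab, X_unlab, oracle, rounds=3, conf=0.9, queries=1):
    """Alternate uncertainty-based queries with confidence-filtered self-training."""
    X_lab, y_lab = list(X_lab), list(y_lab)
    pool = list(X_unlab)
    cents = centroids(X_lab, y_lab)
    for _ in range(rounds):
        probs = [predict_proba(cents, x) for x in pool]
        # Active step: ask the user (oracle) about the most uncertain instances.
        order = sorted(range(len(pool)), key=lambda i: max(probs[i].values()))
        asked = set(order[:queries])
        for i in asked:
            X_lab.append(pool[i])
            y_lab.append(oracle(pool[i]))
        keep = [i for i in range(len(pool)) if i not in asked]
        # EM-style step: retrain only on predictions made with high confidence.
        pseudo = [(pool[i], max(probs[i], key=probs[i].get))
                  for i in keep if max(probs[i].values()) >= conf]
        pool = [pool[i] for i in keep]
        cents = centroids(X_lab + [x for x, _ in pseudo],
                          y_lab + [c for _, c in pseudo])
    return cents
```

The confidence threshold `conf` is what distinguishes this from plain EM self-training: low-confidence pseudo-labels are held back rather than reinforced, while the few user queries are spent exactly where the model is least certain.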