In this paper, we present a variational Bayes (VB) framework for learning continuous hidden Markov models (CHMMs), and we examine the VB framework within active learning. Unlike maximum-likelihood or maximum a posteriori training, which yields a point estimate of the CHMM parameters, VB-based training yields an estimate of the full posterior of the model parameters. This is particularly important for small training sets, since it provides a measure of confidence in the accuracy of the learned model. We exploit this within active learning, acquiring labels for those feature vectors whose labels would be most informative for reducing model-parameter uncertainty. Three active learning algorithms are considered in this paper: 1) query by committee (QBC), which selects data for labeling so as to minimize classification variance; 2) a maximum expected information gain method, which labels data so as to reduce the entropy of the model parameters; and 3) an error-reduction-based procedure, which attempts to minimize classification error over the test data. Experimental results are presented for both synthetic and measured data. We demonstrate that all of these active learning methods significantly reduce the amount of required labeling, compared to random selection of samples for labeling.
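For concreteness, the sketch below illustrates one way a QBC-style query criterion can be realized: each committee member is a set of per-class sequence scorers (standing in for CHMMs whose parameters are drawn from the VB posterior), and the pool item with the highest vote entropy among the committee's class votes is queried. The function names and the toy scorers are illustrative assumptions, not the paper's implementation.

    # Minimal sketch of QBC-style query selection for pool-based active learning.
    # Assumptions (not from the paper): each committee member is a list of
    # per-class scorers, standing in for CHMMs whose parameters were sampled
    # from the VB posterior; disagreement is measured by vote entropy.
    import numpy as np

    def vote_entropy(votes, n_classes):
        """Entropy of the committee's class votes for one candidate sequence."""
        p = np.bincount(votes, minlength=n_classes) / len(votes)
        nz = p > 0
        return float(-np.sum(p[nz] * np.log(p[nz])))

    def qbc_select(pool, committee, n_classes):
        """Index of the pool sequence the committee disagrees on most."""
        scores = []
        for seq in pool:
            votes = [int(np.argmax([score(seq) for score in member]))
                     for member in committee]
            scores.append(vote_entropy(np.array(votes), n_classes))
        return int(np.argmax(scores))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)

        def make_scorer(mu):
            # Toy stand-in for a per-class CHMM log-likelihood.
            return lambda seq: float(-np.sum((seq - mu) ** 2))

        # Three committee members (posterior samples), two classes each,
        # with jitter mimicking parameter uncertainty in the VB posterior.
        committee = [[make_scorer(rng.normal(0.0, 0.3)),
                      make_scorer(rng.normal(1.0, 0.3))] for _ in range(3)]
        pool = [rng.normal(c, 1.0, size=20) for c in (0.0, 0.5, 1.0)]
        print("query index:", qbc_select(pool, committee, n_classes=2))

In this toy run, the pool sequence drawn from the intermediate mean tends to produce the most disagreement across committee members and is therefore the one selected for labeling, which is the qualitative behavior QBC is designed to exhibit.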