Annotating class labels for a large number of time-series data is generally expensive. We propose novel semi-supervised learning algorithms that significantly improve classification accuracy by exploiting a relatively large amount of unlabeled data in conjunction with only a few labeled samples. Our algorithms use the unlabeled data as a regularizer, favoring classifiers that are more confident on the unlabeled data. For the hidden conditional random field, a state-of-the-art conditional probabilistic sequence model, we first adapt the entropy minimization algorithm previously applied in static classification setups. We then introduce more sophisticated margin-based approaches, motivated by semi-supervised support vector machines originally designed for non-sequential data. We provide principled, effective ways to incorporate and minimize the hat loss function for sequence data via probabilistic treatment. We demonstrate the performance improvements achieved by our methods on several semi-supervised time-series classification scenarios.
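To make the regularization idea concrete, here is a minimal sketch of entropy minimization in the static (non-sequential) setup that the abstract cites as a starting point. The objective adds the conditional entropy of the model's predictions on unlabeled data to the supervised negative log-likelihood, so that optimization prefers parameters that are confident on unlabeled samples. The linear softmax classifier, the function names, and the weight `lam` are illustrative assumptions, not the paper's actual hidden-CRF formulation.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy_regularized_loss(W, X_lab, y_lab, X_unl, lam=0.1):
    """Supervised negative log-likelihood on labeled data plus the
    average conditional entropy of predictions on unlabeled data.
    Minimizing the entropy term pushes the classifier toward
    confident (low-entropy) predictions on the unlabeled samples."""
    # Supervised term: -mean log p(y_i | x_i) over labeled pairs.
    p_lab = softmax(X_lab @ W)
    nll = -np.mean(np.log(p_lab[np.arange(len(y_lab)), y_lab] + 1e-12))
    # Unsupervised term: mean entropy H(p(. | x)) over unlabeled points.
    p_unl = softmax(X_unl @ W)
    ent = -np.mean(np.sum(p_unl * np.log(p_unl + 1e-12), axis=1))
    return nll + lam * ent
```

In the paper's sequential setting, the same principle applies with the hidden conditional random field's posterior over sequence labels in place of the softmax posterior; the entropy term is then computed from the model's marginal label distributions.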