We present a semi-supervised time series classification method based on co-training, which uses a hidden Markov model (HMM) and a one-nearest-neighbor (1-NN) classifier as its two learners. Because the HMM models time series effectively only over discrete observations, the series must first be symbolized, and this paper proposes a new granulation-based symbolic representation for that purpose: a granule is constructed for each segment of the time series, a similarity matrix is formed over the granules, and the segments are grouped by spectral clustering applied to that matrix, each cluster yielding one symbol. Experiments on four datasets from the UCR Time Series Data Mining Archive show that the proposed symbolic representation works well with the HMM and that, compared with the supervised method, the semi-supervised method can build accurate classifiers from very little labeled data.
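The symbolization pipeline above can be sketched in a few steps. The following is a minimal illustration, not the paper's exact procedure: the granule here is assumed to be a simple (mean, standard deviation) summary of each segment, the similarity matrix is a Gaussian kernel over granules, and spectral clustering is implemented directly via the symmetric normalized Laplacian followed by k-means; all of these concrete choices are assumptions for illustration.

```python
# Hypothetical sketch of granulation-based symbolization:
# segment -> granule -> similarity matrix -> spectral clustering -> symbols.
import numpy as np

def granulate(series, seg_len):
    """Split a 1-D series into equal-length segments and summarize each
    segment by an assumed granule: (mean, standard deviation)."""
    n_seg = len(series) // seg_len
    segs = series[:n_seg * seg_len].reshape(n_seg, seg_len)
    return np.column_stack([segs.mean(axis=1), segs.std(axis=1)])

def spectral_symbols(granules, n_symbols, gamma=1.0, seed=0):
    """Cluster granules with a basic spectral-clustering pipeline and
    return one symbol (cluster id) per segment."""
    # Similarity matrix from a pairwise Gaussian kernel over granules.
    d2 = ((granules[:, None, :] - granules[None, :, :]) ** 2).sum(-1)
    W = np.exp(-gamma * d2)
    # Symmetric normalized Laplacian, as in normalized-cuts clustering.
    d_inv_sqrt = np.diag(1.0 / np.sqrt(W.sum(axis=1)))
    L = np.eye(len(W)) - d_inv_sqrt @ W @ d_inv_sqrt
    # Embed each segment using eigenvectors of the smallest eigenvalues.
    _, vecs = np.linalg.eigh(L)
    emb = vecs[:, :n_symbols]
    emb /= np.linalg.norm(emb, axis=1, keepdims=True) + 1e-12
    # Plain Lloyd's k-means on the spectral embedding.
    rng = np.random.default_rng(seed)
    centers = emb[rng.choice(len(emb), n_symbols, replace=False)]
    for _ in range(50):
        labels = ((emb[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        for k in range(n_symbols):
            if (labels == k).any():
                centers[k] = emb[labels == k].mean(axis=0)
    return labels

# Toy input: a noisy sine wave, cut into 16-point segments.
series = (np.sin(np.linspace(0, 8 * np.pi, 256))
          + 0.05 * np.random.default_rng(1).normal(size=256))
symbols = spectral_symbols(granulate(series, 16), n_symbols=4)
```

The resulting symbol sequence (one discrete symbol per segment) is what a discrete-observation HMM could then be trained on as one of the two co-training views.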