Non-negative Tucker decomposition (NTD) is applied to the unsupervised training of discrete density HMMs for discovering sequential patterns in data, segmenting sequential data into patterns, and recognizing the discovered patterns in unseen data. Structure constraints are imposed on the NTD so that it shares its parameters with the HMM. Two training schemes are proposed: the first uses NTD as a regularizer for Baum-Welch (BW) training of the HMM; the second alternates between NTD and BW, with the output of each serving to initialize the other. On the task of unsupervised spoken pattern discovery from the TIDIGITS database, both training schemes improve over BW training in terms of pattern purity, accuracy of the segmentation boundaries, and speech recognition accuracy. Furthermore, the alternating NTD/BW training is experimentally observed to outperform NTD-regularized BW, plain BW training, and BW training with simulated annealing.
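The core operation named in the abstract, non-negative Tucker decomposition, can be illustrated with a generic multiplicative-update sketch in plain NumPy. This is an illustrative implementation of standard NTD only; it does not include the paper's HMM parameter-sharing constraints, and all function and variable names are my own:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding (C order): move `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of unfold: restore the full tensor shape."""
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

def ntd(X, ranks, n_iter=200, eps=1e-9, seed=0):
    """Non-negative Tucker decomposition via multiplicative updates.

    Fits X ~= G x1 A x2 B x3 C with a non-negative core G and
    non-negative factor matrices, minimizing squared Frobenius error.
    """
    rng = np.random.default_rng(seed)
    dims = X.shape
    factors = [rng.random((dims[n], ranks[n])) for n in range(X.ndim)]
    G = rng.random(ranks)
    for _ in range(n_iter):
        # Update each factor matrix in turn.
        for n in range(X.ndim):
            others = [factors[m] for m in range(X.ndim) if m != n]
            K = others[0]
            for F in others[1:]:
                K = np.kron(K, F)          # Kronecker product of the other modes
            M = unfold(G, n) @ K.T         # maps factor n onto the data unfolding
            U = factors[n]
            factors[n] = U * (unfold(X, n) @ M.T) / (U @ M @ M.T + eps)
        # Update the core tensor via its mode-0 unfolding.
        A = factors[0]
        rest = factors[1]
        for F in factors[2:]:
            rest = np.kron(rest, F)
        G0 = unfold(G, 0)
        G0 = G0 * (A.T @ unfold(X, 0) @ rest) / (A.T @ A @ G0 @ rest.T @ rest + eps)
        G = fold(G0, 0, G.shape)
    return G, factors
```

The multiplicative form of the updates keeps every entry non-negative given a non-negative initialization, which is why such updates are a common choice for NTD; in the paper's setting the decomposition is additionally constrained so that its parameters coincide with the HMM parameters.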