Many discriminative classification algorithms are designed for tasks where samples can be represented by fixed-length vectors. However, many examples in the fields of text processing, computational biology, and speech recognition are best represented as variable-length sequences of vectors. Although several dynamic kernels have been proposed for mapping sequences of discrete observations into fixed-dimensional feature spaces, few kernels exist for sequences of continuous observations. This paper introduces continuous rational kernels, an extension of standard rational kernels, as a general framework for classifying sequences of continuous observations. In addition to supporting the definition of new task-dependent kernels, continuous rational kernels allow existing continuous dynamic kernels, such as Fisher and generative kernels, to be computed using standard weighted finite-state transducer algorithms. Preliminary results on both a large vocabulary continuous speech recognition (LVCSR) task and the TIMIT database are presented.
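To make the underlying computation concrete: in the discrete setting that continuous rational kernels generalize, a rational kernel evaluates K(x, y) by composing acceptors for the two sequences with a weighted transducer T and its inverse, then summing path weights over the (+, ×) semiring. The sketch below is a minimal illustration of that discrete case, not the paper's implementation: it computes the bigram-count kernel that a counting transducer composed with its inverse realizes, with a direct dictionary computation standing in for WFST composition and shortest-distance; the helper names (`bigram_counts`, `bigram_kernel`) are assumptions introduced here.

```python
from collections import Counter

def bigram_counts(seq):
    """Count the bigrams occurring in a discrete symbol sequence."""
    return Counter(zip(seq, seq[1:]))

def bigram_kernel(x, y):
    """Rational n-gram kernel for n = 2:
    K(x, y) = sum_z count_x(z) * count_y(z),
    the value that the counting transducer T composed with its
    inverse assigns to the pair (x, y) over the (+, *) semiring.
    """
    cx, cy = bigram_counts(x), bigram_counts(y)
    return sum(cx[z] * cy[z] for z in cx.keys() & cy.keys())

# Example: two short symbol sequences (e.g., phone labels).
x = ["a", "b", "a", "b", "c"]
y = ["b", "a", "b", "b", "c"]
print(bigram_kernel(x, y))  # shared bigrams ab, ba, bc -> 2*1 + 1*1 + 1*1 = 5
```

The continuous extension described in the abstract replaces the discrete symbol matches in this sum with scores derived from continuous observation sequences, while keeping the same composition-and-shortest-distance machinery.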