Learning for efficient retrieval of structured data with noisy queries
Proceedings of the 24th international conference on Machine learning
Sequence alignment is a common subtask in many applications, such as genetic matching and music information retrieval. Crucial to the performance of any sequence alignment algorithm is an accurate model of the reward for transforming one sequence into another. Given such a model, we can find the optimal alignment of two sequences, or perform query-based selection from a database of target sequences, using dynamic programming. In this paper, we describe a new algorithm that learns reward models from positive and negative examples of matching sequences. We develop a gradient boosting approach that reduces sequence learning to a series of standard function approximation problems, each of which can be solved by any function approximator. A key advantage of this approach is that it can induce complex features via function approximation rather than relying on the user to predefine them. Our experiments on synthetic data and a fairly complex real-world music retrieval domain demonstrate that our approach achieves better accuracy and faster learning than a state-of-the-art structured SVM approach.
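To make the dynamic-programming step concrete, the following is a minimal sketch (not the paper's implementation) of reward-maximizing sequence alignment with a pluggable reward model. The `match_reward` and `gap_reward` functions here are hypothetical hand-set stand-ins for the learned reward models the abstract describes.

```python
# Illustrative sketch: Needleman-Wunsch-style alignment that maximizes
# total reward under an arbitrary reward model. In the paper's setting
# the reward functions would be learned via gradient boosting; here we
# plug in toy hand-written rewards for demonstration only.

def align(x, y, match_reward, gap_reward):
    """Return the maximum total reward of aligning sequences x and y."""
    n, m = len(x), len(y)
    # dp[i][j] = best reward for aligning x[:i] with y[:j]
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):          # x aligned against leading gaps
        dp[i][0] = dp[i - 1][0] + gap_reward(x[i - 1])
    for j in range(1, m + 1):          # y aligned against leading gaps
        dp[0][j] = dp[0][j - 1] + gap_reward(y[j - 1])
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = max(
                dp[i - 1][j - 1] + match_reward(x[i - 1], y[j - 1]),  # match/substitute
                dp[i - 1][j] + gap_reward(x[i - 1]),                  # gap in y
                dp[i][j - 1] + gap_reward(y[j - 1]),                  # gap in x
            )
    return dp[n][m]

# Toy reward model: +1 for a match, -1 for a mismatch, -0.5 per gap.
score = align("ACGT", "AGT",
              match_reward=lambda a, b: 1.0 if a == b else -1.0,
              gap_reward=lambda a: -0.5)
# Best alignment pairs A-A, gaps C, pairs G-G and T-T: 1 - 0.5 + 1 + 1 = 2.5
```

Because the recurrence only calls the reward functions as black boxes, the same procedure works whether the rewards are hand-crafted, as above, or induced by a learned model.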