Hidden Markov Models (HMMs) are employed today in a wide range of applications, from speech recognition to bioinformatics. In this paper, we present the List Viterbi training algorithm, a version of the Expectation-Maximization (EM) algorithm based on the List Viterbi algorithm rather than the commonly used forward-backward algorithm. We develop both batch and online versions of the algorithm, and we describe an application in the context of keyword search over databases, where we exploit an HMM to match keywords to database terms. In our experiments, we test the online version of the training algorithm in a semi-supervised setting that takes into account the feedback provided by users.
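The idea of replacing the forward-backward E-step with Viterbi-style decoding can be illustrated with the simplest member of that family, Viterbi training (hard EM): each iteration decodes the single best state path for every sequence and re-estimates the parameters from the resulting counts. The paper's List Viterbi training generalizes this by accumulating counts over the top-k paths produced by the List Viterbi algorithm; the sketch below uses k = 1 for brevity, and all model values and function names are illustrative assumptions, not taken from the paper.

```python
import math

def viterbi(obs, pi, A, B):
    """Return the most likely state path for a discrete observation sequence."""
    n_states = len(pi)
    # delta[t][s]: best log-score of any path ending in state s at time t
    delta = [[math.log(pi[s]) + math.log(B[s][obs[0]]) for s in range(n_states)]]
    back = []  # backpointers for path recovery
    for t in range(1, len(obs)):
        row, ptr = [], []
        for s in range(n_states):
            best_prev = max(range(n_states),
                            key=lambda p: delta[-1][p] + math.log(A[p][s]))
            row.append(delta[-1][best_prev] + math.log(A[best_prev][s])
                       + math.log(B[s][obs[t]]))
            ptr.append(best_prev)
        delta.append(row)
        back.append(ptr)
    # Backtrack from the best final state.
    path = [max(range(n_states), key=lambda s: delta[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

def viterbi_training_step(sequences, pi, A, B, n_symbols):
    """One hard-EM step: decode best paths, re-estimate A and B from counts.

    List Viterbi training would instead sum (possibly weighted) counts
    over the k best paths of each sequence.
    """
    n = len(pi)
    trans = [[1e-6] * n for _ in range(n)]          # small additive smoothing
    emit = [[1e-6] * n_symbols for _ in range(n)]
    for obs in sequences:
        path = viterbi(obs, pi, A, B)
        for t, s in enumerate(path):
            emit[s][obs[t]] += 1
            if t > 0:
                trans[path[t - 1]][s] += 1
    # Normalize counts into stochastic matrices.
    A_new = [[c / sum(row) for c in row] for row in trans]
    B_new = [[c / sum(row) for c in row] for row in emit]
    return A_new, B_new
```

Iterating `viterbi_training_step` until the decoded paths stop changing gives a fast, if coarser, alternative to forward-backward EM; using k > 1 paths, as in List Viterbi training, recovers some of the soft-count information that the single best path discards.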