In this paper, we propose a document topic model (DTM), based on non-negative matrix factorization (NMF), for Japanese spontaneous spoken document retrieval. Each document is interpreted as a generative topic model that can belong to many topics. The relevance of a document to a query is expressed by the probability that the query words are generated by the document's model. Unlike the conventional vector space model, where query-document matching occurs at the word level, the topic model performs matching at the concept, or semantic, level. This alleviates the term-mismatch problem in information retrieval: relevant documents can still be retrieved even when the query words do not appear in them. The method also benefits the retrieval of spoken documents containing term misrecognitions, which are peculiar to speech transcripts. Experiments are conducted on a test collection from the Corpus of Spontaneous Japanese (CSJ), in which some of the evaluation queries and answer references are suited to semantic-level retrieval. Retrieval performance improves as the number of topics increases, and once the number of topics exceeds a threshold, the NMF-based method surpasses the tf-idf-based vector space model (VSM). Furthermore, compared with the VSM-based method, the NMF-based topic model shows clear advantages in handling term mismatch and term misrecognition.
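The idea of scoring documents through a topic-level reconstruction, rather than by exact word overlap, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the corpus, vocabulary, topic count, and the simple multiplicative-update NMF below are all illustrative assumptions. The key point is that the factorization V ≈ WH smooths the term-document weights, so a document can receive a nonzero score for a query word it never contains, as long as it shares a topic with documents that do.

```python
import numpy as np

def nmf(V, k, iters=300, eps=1e-9, seed=0):
    """Factor a non-negative matrix V (terms x docs) as W @ H with k topics,
    using multiplicative updates that minimize squared Frobenius error."""
    rng = np.random.default_rng(seed)
    n_terms, n_docs = V.shape
    W = rng.random((n_terms, k)) + eps   # term-topic weights
    H = rng.random((k, n_docs)) + eps    # topic-document weights
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

def retrieve(V, query_terms, vocab, k=2):
    """Rank documents by the smoothed (topic-reconstructed) weight of the
    query terms, a rough stand-in for query-generation probability."""
    W, H = nmf(V, k)
    approx = W @ H                       # smoothed term-document matrix
    idx = [vocab.index(t) for t in query_terms if t in vocab]
    scores = approx[idx].sum(axis=0)
    return np.argsort(-scores), scores

# Toy corpus (rows = terms, columns = documents); doc 2 mixes both topics
# but never contains the word "retrieval".
vocab = ["speech", "retrieval", "topic", "model"]
V = np.array([[2., 0., 1.],
              [1., 0., 0.],
              [0., 2., 1.],
              [0., 1., 1.]])
ranking, scores = retrieve(V, ["retrieval"], vocab)
```

In a plain tf-idf VSM, documents 1 and 2 would both score zero for the query "retrieval"; after the rank-2 factorization, document 2 tends to pick up a small positive score through the topic it shares with document 0, which is exactly the term-mismatch behavior the abstract describes.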