Model Fusion in Conceptual Language Modeling
ECIR '09 Proceedings of the 31st European Conference on IR Research on Advances in Information Retrieval
The main idea in this paper is to incorporate medical knowledge into the language modeling approach to information retrieval (IR). Our model makes use of the textual part of the ImageCLEFmed corpus and of the medical knowledge found in the Unified Medical Language System (UMLS) knowledge sources. UMLS allows us to create a conceptual representation of each sentence in the corpus, and from these representations we build a graph model for each document. As in the standard language modeling approach, we evaluate the probability that a document graph model generates the query graph. Graphs are created from the medical texts and queries, and are built for different languages with different methods. After presenting the graph model, we describe our experiments, which mix different concept sources (i.e. languages and extraction methods) when matching query and document graphs. Results show that a language model over concepts performs well in IR, and that combining multiple concept sources further improves the results. Lastly, using relations between concepts (provided by the graphs under consideration) improves results when only a few conceptual sources are used to analyze the query.
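The core scoring idea described above, evaluating the probability that a document model generates the query, can be sketched for the unigram (concept-only, relation-free) case. The following is a minimal illustration, not the authors' implementation: it assumes documents and queries have already been mapped to bags of UMLS concept identifiers (the CUI strings below are hypothetical placeholders), uses Jelinek-Mercer smoothing as one common choice of smoothing, and fuses several concept sources by summing their log-likelihoods (i.e. taking the product of the per-source generation probabilities).

```python
import math
from collections import Counter


def concept_lm_score(query_concepts, doc_concepts, collection_counts,
                     collection_size, lam=0.8):
    """Log-likelihood of the query concepts under a smoothed document
    concept language model (Jelinek-Mercer smoothing, assumed here)."""
    doc_counts = Counter(doc_concepts)
    doc_len = len(doc_concepts)
    log_p = 0.0
    for c in query_concepts:
        p_doc = doc_counts[c] / doc_len if doc_len else 0.0
        p_coll = collection_counts[c] / collection_size
        # Mix the document model with the collection model to avoid
        # zero probabilities for unseen concepts.
        log_p += math.log(lam * p_doc + (1.0 - lam) * p_coll)
    return log_p


def fuse_sources(per_source_scores):
    """Fuse log-likelihoods from several concept sources (e.g. different
    languages or extraction methods) by summing them, which corresponds
    to multiplying the per-source generation probabilities."""
    return sum(per_source_scores)


# Toy collection with hypothetical concept identifiers.
d1 = ["c_pneumonia", "c_lung"]
d2 = ["c_fracture", "c_bone"]
collection_counts = Counter(d1 + d2)
collection_size = len(d1) + len(d2)

query = ["c_pneumonia"]
s1 = concept_lm_score(query, d1, collection_counts, collection_size)
s2 = concept_lm_score(query, d2, collection_counts, collection_size)
# The document actually containing the query concept scores higher.
```

A per-source weight could be added inside `fuse_sources` to favor more reliable concept extractors; the paper's graph model additionally scores relations between concepts, which this unigram sketch omits.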