The influence of the document ranking in expert search
CIKM '09 Proceedings of the 18th ACM international conference on Information and knowledge management
The influence of the document ranking in expert search
Information Processing and Management: an International Journal
An expert search system assists users with their "expertise need" by suggesting people who have expertise relevant to their query. Most systems work by first ranking documents in response to the query, and then ranking candidate experts using evidence from this initial document ranking together with known associations between documents and candidates. In this paper, we investigate whether the evaluation of an expert search system can be approximated using only its underlying document ranking. We assess the accuracy of each document ranking measure by how closely it correlates with the ground-truth evaluation of the candidate ranking. Interestingly, we find that improving the underlying document ranking does not necessarily result in an improved candidate ranking.
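As a minimal sketch of the two-stage process described above, the fragment below aggregates document relevance scores into candidate expertise scores by summing over each candidate's associated documents (a CombSUM-style voting approach, one common choice rather than the paper's specific model), and includes a simple Kendall's tau rank correlation of the kind used to compare a document-ranking evaluation against the candidate-ranking ground truth. All names and scores are hypothetical illustrations.

```python
from collections import defaultdict
from itertools import combinations

def rank_candidates(doc_scores, associations):
    """Rank candidates by summing the relevance scores of their
    associated documents (CombSUM-style voting). `doc_scores` maps
    document id -> score; `associations` maps document id -> list of
    candidate ids known to be associated with that document."""
    cand_scores = defaultdict(float)
    for doc, score in doc_scores.items():
        for cand in associations.get(doc, []):
            cand_scores[cand] += score
    return sorted(cand_scores.items(), key=lambda kv: kv[1], reverse=True)

def kendall_tau(xs, ys):
    """Kendall's tau-a between two paired score lists: the fraction of
    concordant minus discordant pairs. A high tau between per-system
    document-ranking scores and candidate-ranking scores would suggest
    the former can approximate the latter."""
    pairs = list(combinations(range(len(xs)), 2))
    conc = sum(1 for i, j in pairs if (xs[i] - xs[j]) * (ys[i] - ys[j]) > 0)
    disc = sum(1 for i, j in pairs if (xs[i] - xs[j]) * (ys[i] - ys[j]) < 0)
    return (conc - disc) / len(pairs)

# Hypothetical document ranking and document-candidate associations.
doc_scores = {"d1": 0.9, "d2": 0.7, "d3": 0.4}
associations = {"d1": ["alice"], "d2": ["alice", "bob"], "d3": ["bob"]}
print(rank_candidates(doc_scores, associations))
```

Note that, consistent with the abstract's finding, a change that improves `doc_scores` under a document-level measure need not change the candidate ordering produced by the aggregation at all, which is why the correlation between the two evaluations is an empirical question.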