Despite the wide use of the Internet, it is difficult to develop an evaluation method for web documents that truly reflects users' needs. Many automatic ranking systems use citation (link) analysis to measure the relative importance of products or documents. However, automatic citation analysis has a limitation: it does not capture the varying viewpoints involved in human evaluation. Human evaluation of web documents is therefore very helpful for finding relevant information in a specific domain. Currently, human evaluation is performed by a single expert or by general users, without considering the evaluators' degree of domain knowledge. In this paper, we propose automatically forming a dynamic group of experts from among the users to evaluate domain-specific web documents. Each expert carries a dynamic authority weight that depends on the expert's past performance in ranking evaluation. In addition, we develop an effectiveness measure for the ranking process. Evaluation by such a group of experts yields more accurate search results and serves as a good measure of user preference even when the amount of user feedback is small, while the dynamic adjustment of authority weights captures each expert's evaluation effectiveness.
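The core mechanism described above — aggregating expert ratings by authority weight and then adjusting each expert's weight according to how well their rating agreed with subsequent user feedback — can be sketched as follows. The abstract does not give the exact update formula, so the multiplicative reward/penalty rule below, along with all class and parameter names (`ExpertGroup`, `lr`, ratings on a [0, 1] scale), is an illustrative assumption rather than the paper's method.

```python
# Hedged sketch of dynamic authority weights for a group of experts.
# The update rule (multiplicative reward/penalty driven by the gap between
# an expert's rating and user feedback) is an assumption, not the paper's formula.

class ExpertGroup:
    def __init__(self, expert_ids, initial_weight=1.0):
        # Every expert starts with the same authority weight.
        self.weights = {e: initial_weight for e in expert_ids}

    def aggregate(self, ratings):
        """Authority-weighted average of expert ratings for one document."""
        total = sum(self.weights[e] for e in ratings)
        return sum(self.weights[e] * r for e, r in ratings.items()) / total

    def update(self, ratings, feedback, lr=0.1):
        """Raise the weight of experts who agreed with user feedback,
        lower the weight of those who disagreed (ratings in [0, 1])."""
        for e, r in ratings.items():
            error = abs(r - feedback)                 # 0 = perfect agreement
            self.weights[e] *= (1.0 + lr * (1.0 - 2.0 * error))
            self.weights[e] = max(self.weights[e], 1e-6)  # keep weights positive

group = ExpertGroup(["expert_a", "expert_b"])
ratings = {"expert_a": 0.9, "expert_b": 0.2}
score = group.aggregate(ratings)      # 0.55 while weights are still equal
group.update(ratings, feedback=0.8)   # expert_a agreed more, so gains authority
```

With equal initial weights the aggregate is a plain average; after feedback, the expert whose rating was closer to the users' judgment contributes more to the next aggregation, which is one plausible reading of the "dynamic authority weight" idea.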