As the Web continues to grow, the number of results returned by search engines is too large for users to review. Recently, the problem of personalizing the ranked result list based on user feedback has attracted considerable attention. Such approaches usually require a large amount of user feedback on the results to serve as training data. In this work, we present a method that overcomes this limitation by exploiting all search results, both rated and unrated, to train a ranking function. Given a small initial set of user feedback for some search results, we first cluster all results returned for the query. Based on the resulting clusters, we extend the initial set of rated results to include new, unrated results. We then apply a popular training method (Ranking SVM) to learn a ranking function from the expanded result set. Our experiments show that this method closely approximates an "ideal" system in which every result of each query is rated before being used as training data, something that is not feasible in a real-world scenario.