The authors address the problem of unsupervised ensemble ranking. Traditional approaches either combine multiple ranking criteria into a unified representation to obtain an overall ranking score or utilize rank fusion or aggregation techniques to combine the ranking results. Beyond these “combine-then-rank” and “rank-then-combine” approaches, the authors propose a novel “rank-learn-combine” framework, called Interactive Ranking (iRANK), in which two base rankers “teach” each other before combination by providing their own ranking results as feedback to each other, thereby boosting ranking performance. This mutual ranking refinement continues until the two base rankers can no longer learn from each other. The overall performance improves because the mutual learning mechanism enhances both base rankers. The authors further design two ranking refinement strategies to use the feedback efficiently and effectively, based on reasonable assumptions and rational analysis. Although iRANK is applicable to many tasks, as a case study the authors apply the framework to sentence ranking in query-focused summarization and evaluate its effectiveness on the DUC 2005 and 2006 data sets. The results are encouraging, with consistent and promising improvements. © 2010 Wiley Periodicals, Inc.
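To make the described loop concrete, the following is a minimal Python sketch of a mutual ranking refinement procedure in the spirit of iRANK. The function and parameter names (irank, ranker_a, ranker_b, max_rounds, tol), the convergence test, and the final score averaging are illustrative assumptions; the paper's two specific refinement strategies are not reproduced here.

```python
from typing import Callable, Dict, List

# A "ranker" is assumed to map a list of items plus a feedback score dict
# (possibly empty) to a new score dict over the same items.
Ranker = Callable[[List[str], Dict[str, float]], Dict[str, float]]

def irank(items: List[str],
          ranker_a: Ranker,
          ranker_b: Ranker,
          max_rounds: int = 10,
          tol: float = 1e-4) -> Dict[str, float]:
    """Sketch of rank-learn-combine: each base ranker rescores the items
    using the other's current scores as feedback, until neither changes,
    then the two refined score lists are combined."""
    scores_a = ranker_a(items, {})            # initial rankings, no feedback
    scores_b = ranker_b(items, {})
    for _ in range(max_rounds):
        new_a = ranker_a(items, scores_b)     # A learns from B's feedback
        new_b = ranker_b(items, scores_a)     # B learns from A's feedback
        delta = max(
            max(abs(new_a[i] - scores_a[i]) for i in items),
            max(abs(new_b[i] - scores_b[i]) for i in items),
        )
        scores_a, scores_b = new_a, new_b
        if delta < tol:                       # rankers no longer learn
            break
    # Hypothetical combination step: simple average of the refined scores.
    return {i: 0.5 * (scores_a[i] + scores_b[i]) for i in items}
```

In the summarization case study, the items would be candidate sentences and the two base rankers would be unsupervised sentence-scoring methods; the averaging shown above is only one plausible final combination.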