This paper describes an approach to optimizing query-by-visual-example results by combining visual features with implicit user feedback in interactive video retrieval. To this end, we propose a framework in which video processing is performed with well-established techniques, while implicit user feedback is analyzed with a graph-based approach that processes the user's actions and navigation patterns during a search session in order to infer semantic relations between video segments. To combine the visual and implicit-feedback information, we train a support vector machine classifier on positive and negative examples generated from the graph-structured past user interaction data. The classifier then reranks the visual search results that were initially ranked by visual features alone. This framework is embedded in an interactive video search engine and evaluated in a two-phase user experiment: first, we record user actions during typical retrieval sessions; then, we evaluate the reranking of the visual query-by-example results. The evaluation demonstrates that the proposed approach improves the ranking in most of the evaluated queries.
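The pipeline described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the graph is simplified to an adjacency mapping where segments linked to a clicked segment count as positive examples and all other past segments as negatives, features are toy 2-D vectors, and scikit-learn's `SVC` stands in for the SVM used in the paper. All names (`rerank`, `feat`, `interaction_graph`, `clicked`) are hypothetical.

```python
import numpy as np
from sklearn.svm import SVC

def rerank(results, feat, interaction_graph, clicked):
    """Rerank visual search results using graph-structured implicit feedback.

    results: segment IDs from the initial visual search, in visual-rank order
    feat: dict mapping segment ID -> visual feature vector (toy assumption)
    interaction_graph: dict mapping a segment to segments linked to it by
        past user actions (simplified stand-in for the paper's graph)
    clicked: the segment the user selected for the current query
    """
    # Positive examples: the clicked segment and its graph neighbors;
    # negative examples: all other segments with known features.
    pos = list(interaction_graph.get(clicked, [])) + [clicked]
    neg = [s for s in feat if s not in pos]

    X = np.array([feat[s] for s in pos + neg])
    y = np.array([1] * len(pos) + [0] * len(neg))
    clf = SVC(kernel="linear").fit(X, y)

    # Rerank the initial results by the classifier's decision score,
    # highest (most "relevant") first.
    scores = clf.decision_function(np.array([feat[s] for s in results]))
    return [s for _, s in sorted(zip(scores, results), reverse=True)]
```

Under this sketch, a result segment that the interaction graph associates with the clicked example is pushed above one that is only visually similar to negatives, which is the effect the framework aims for.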