This paper presents VideoReach, a novel online video recommendation system that reduces the effort users spend finding videos relevant to their current viewing, without requiring the large collection of user profiles that traditional recommenders depend on. In this system, video recommendation is formulated as finding a list of relevant videos in terms of multimodal relevance (i.e., textual, visual, and aural relevance) and user click-through. Since different videos have different intra-weights of relevance within an individual modality and different inter-weights among modalities, we adopt relevance feedback to find optimal weights automatically from user click-through, together with an attention fusion function to fuse the multimodal relevance. We use 20 clips, retrieved by the top 10 queries from more than 13k online videos, as representative test videos, and report superior performance compared with an existing video site.
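The abstract's core idea — combining per-modality relevance scores with inter-modality weights that are adjusted from click-through feedback — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual formulation: the function names, the linear fusion, and the multiplicative weight-update rule with renormalization are all assumptions introduced here for clarity.

```python
def fuse_relevance(scores, weights):
    """Fuse per-modality relevance scores into one score using
    inter-modality weights (a simple linear combination here;
    the paper uses an attention fusion function)."""
    assert set(scores) == set(weights)
    return sum(weights[m] * scores[m] for m in scores)

def update_weights(weights, clicked_scores, lr=0.1):
    """Hypothetical relevance-feedback step: shift weight toward
    modalities that scored high on a video the user clicked,
    then renormalize so the inter-weights sum to 1."""
    shifted = {m: weights[m] + lr * clicked_scores[m] for m in weights}
    total = sum(shifted.values())
    return {m: w / total for m, w in shifted.items()}

# Start from uniform inter-weights over the three modalities.
weights = {"textual": 1 / 3, "visual": 1 / 3, "aural": 1 / 3}

# Relevance of one candidate video to the currently viewed video.
candidate = {"textual": 0.8, "visual": 0.4, "aural": 0.2}
score = fuse_relevance(candidate, weights)

# The user clicks a recommended video whose visual relevance was high,
# so the visual modality gains weight on the next round.
weights = update_weights(weights, {"textual": 0.2, "visual": 0.9, "aural": 0.1})
```

In this toy update, a click on a visually similar video raises the visual inter-weight relative to the others, so subsequent recommendations lean more on visual relevance — the same feedback loop the system uses, in place of a pre-built user profile.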