We present a simple and scalable algorithm for top-N recommendation that can handle very large datasets with (binary-rated) implicit feedback. We focus on memory-based collaborative filtering algorithms similar to the well-known neighbor-based technique for explicit feedback. The key difference, which makes the algorithm particularly scalable, is that it uses positive feedback only and never explicitly computes the complete (user-by-user or item-by-item) similarity matrix. We study the proposed algorithm on data from the Million Song Dataset (MSD) challenge, whose task was to suggest a set of songs (out of more than 380k available) to more than 100k users, given half of each user's listening history and the complete listening histories of another 1 million users. In particular, we investigate the entire recommendation pipeline, from the definition of suitable similarity and scoring functions to strategies for aggregating multiple rankings into the overall recommendation. The proposed technique extends and improves the one that won the previous year's MSD challenge.
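The core idea described above can be sketched as follows. This is a minimal, hedged illustration, not the paper's actual method: it uses plain cosine similarity on binary feedback (the paper defines its own similarity and scoring functions), and all names are illustrative. The point it demonstrates is that, with positive-only feedback, item-item similarities can be computed on the fly for the items in a user's history, so no complete similarity matrix is ever materialized.

```python
# Hedged sketch of memory-based, item-based top-N recommendation from
# positive-only (binary) implicit feedback. Names are illustrative.
from collections import defaultdict

def recommend_top_n(user_items, train, n=5):
    """Score candidate items by cosine similarity to the items the user
    already listened to, computing similarities lazily (no full matrix)."""
    # Invert the training data: item -> set of users who listened to it.
    item_users = defaultdict(set)
    for u, items in train.items():
        for i in items:
            item_users[i].add(u)

    scores = defaultdict(float)
    for i in user_items:                     # items in the user's history
        users_i = item_users[i]
        # Candidate items are those co-listened with i by at least one user.
        candidates = set()
        for u in users_i:
            candidates.update(train[u])
        for j in candidates:
            if j in user_items:
                continue                     # skip already-known items
            users_j = item_users[j]
            overlap = len(users_i & users_j)
            if overlap:
                # Cosine similarity between binary item vectors.
                scores[j] += overlap / ((len(users_i) * len(users_j)) ** 0.5)

    # Rank candidates by aggregated score and return the top N.
    return [j for j, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:n]]

train = {
    "u1": {"a", "b", "c"},
    "u2": {"a", "b", "d"},
    "u3": {"b", "c", "d"},
}
print(recommend_top_n({"a", "b"}, train, n=2))
```

At MSD scale (380k songs, 1M+ users) the inner loops would of course operate on sparse user/item index structures rather than Python sets, but the access pattern is the same: only the rows touched by the active user's items are ever visited.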