The effectiveness of existing top-N recommendation methods decreases as the sparsity of the datasets increases. To alleviate this problem, we present an item-based method for generating top-N recommendations that learns the item-item similarity matrix as the product of two low-dimensional latent factor matrices. These matrices are learned using a structural equation modeling approach, wherein the value being estimated is not used for its own estimation. A comprehensive set of experiments on multiple datasets at three different sparsity levels indicates that the proposed methods handle sparse datasets effectively and outperform other state-of-the-art top-N recommendation methods. The experimental results also show that the relative performance gains over competing methods increase as the data gets sparser.
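As an illustration of the idea described above, the following is a minimal, hypothetical sketch (not the authors' implementation): the item-item similarity matrix is never materialized but is represented as the product of two low-dimensional factor matrices, here called P and Q, and, following the structural-equation-modeling constraint, an item's own feedback is excluded when estimating its score. All names, sizes, and the random factors are illustrative assumptions.

```python
import numpy as np

# Illustrative factored item-similarity scorer (hypothetical sketch).
# The full item-item similarity matrix S is approximated as S ~= P @ Q.T,
# so only two low-dimensional factor matrices are stored and learned.
rng = np.random.default_rng(0)
n_items, k = 6, 3                               # toy sizes (assumptions)
P = rng.normal(scale=0.1, size=(n_items, k))    # "source" item factors
Q = rng.normal(scale=0.1, size=(n_items, k))    # "target" item factors

def score(user_items, i):
    """Estimate the score of item i for a user with history user_items."""
    # Structural-equation-modeling constraint: item i's own feedback is
    # not used to estimate item i's score.
    others = [j for j in user_items if j != i]
    if not others:
        return 0.0
    # sim(j, i) = P[j] @ Q[i]; sum similarities over the user's other items.
    return float(P[others].sum(axis=0) @ Q[i])

def top_n(user_items, n):
    """Rank the user's unseen items by estimated score; return the top n."""
    unseen = [i for i in range(n_items) if i not in user_items]
    return sorted(unseen, key=lambda i: score(user_items, i), reverse=True)[:n]

recs = top_n({0, 2, 5}, n=3)
```

Because the similarity matrix is factored rather than stored explicitly, the memory cost grows with the number of items times the rank k instead of quadratically in the number of items, which is what makes the approach practical on sparse, large item catalogs.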