Learning algorithms that embed objects into Euclidean space have become the methods of choice for a wide range of problems, from recommendation and image search to playlist prediction and language modeling. Probabilistic embedding methods offer elegant solutions to these problems, but they can be expensive to train and to store as a single large monolithic model. In this paper, we propose a method that trains not one monolithic model but multiple local embeddings, for a class of pairwise conditional models especially suited to sequence and co-occurrence modeling. We show that both computation and memory for training these multi-space models can be efficiently parallelized across many nodes of a cluster. Focusing on sequence modeling for music playlists, we show that the method substantially speeds up training while maintaining high model quality.
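To make the "pairwise conditional model" concrete, here is a minimal sketch of the kind of distance-based transition model the abstract alludes to for sequence data: the probability of the next item b given the current item a is a softmax over negative squared Euclidean distances between the items' embedding vectors. The vocabulary size, dimensionality, and random initialization below are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_items, dim = 5, 2          # toy vocabulary of 5 items embedded in 2-D (assumed sizes)
X = rng.normal(size=(n_items, dim))  # one embedding point per item, random init

def transition_probs(X, a):
    """P(b | a) proportional to exp(-||X[a] - X[b]||^2): nearby items are likelier next."""
    sq_dists = np.sum((X - X[a]) ** 2, axis=1)  # squared distance from item a to every item
    logits = -sq_dists
    logits -= logits.max()   # subtract the max for numerical stability
    p = np.exp(logits)
    return p / p.sum()       # normalize into a probability distribution

p = transition_probs(X, 0)   # distribution over the next item, given item 0
```

In a monolithic model, every item lives in one shared space; the multi-space idea is to train several such local embeddings (e.g., over clusters of items) so that each can be fit, and held in memory, on a separate node.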