We propose a context-aware method for document recommendation. The idea is to model historical sequential access data offline with a Variable Memory Markov (VMM) model and, when online, to make recommendations by searching a Prediction Suffix Tree (PST); we implement a disk-based PST. Document recommendation is more challenging than web query recommendation because its larger state space aggravates the sparsity problem. In this paper, we tackle the problem by (1) pruning in the modeling phase and (2) smoothing in the recommendation phase. Empirical evidence shows that our method reduces model complexity significantly while achieving good recommendation accuracy.
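To make the two phases concrete, here is a minimal in-memory sketch of the general technique the abstract describes: an offline pass that counts next-document occurrences for every bounded-length suffix of the access sequences (pruning rare contexts to fight sparsity), and an online lookup that backs off to the longest stored suffix and ranks documents by an additively smoothed probability. The `max_order`, `min_count`, and `alpha` parameters, and the replacement of the paper's disk-based PST with a Python dictionary, are all illustrative assumptions, not details from the paper.

```python
from collections import defaultdict, Counter

class PST:
    """Sketch of a Prediction Suffix Tree for next-document recommendation."""

    def __init__(self, max_order=3, min_count=2, alpha=0.1):
        self.max_order = max_order  # longest context (suffix) length kept
        self.min_count = min_count  # pruning threshold in the modeling phase
        self.alpha = alpha          # additive-smoothing constant (assumed scheme)
        self.counts = defaultdict(Counter)  # context tuple -> next-doc counts
        self.vocab = set()

    def fit(self, sessions):
        """Offline phase: count next-document occurrences for every suffix,
        then prune contexts with too little support."""
        for session in sessions:
            self.vocab.update(session)
            for i, doc in enumerate(session):
                for k in range(0, min(self.max_order, i) + 1):
                    ctx = tuple(session[i - k:i])
                    self.counts[ctx][doc] += 1
        # Pruning: drop contexts whose total support is below min_count
        # (the empty context is always kept as a fallback).
        self.counts = defaultdict(Counter, {
            c: t for c, t in self.counts.items()
            if sum(t.values()) >= self.min_count or c == ()
        })

    def recommend(self, history, top_n=3):
        """Online phase: back off to the longest stored suffix of the
        history, then rank documents by smoothed probability."""
        ctx = tuple(history[-self.max_order:])
        while ctx not in self.counts and ctx:
            ctx = ctx[1:]  # back off to a shorter suffix
        counts = self.counts[ctx]
        total = sum(counts.values()) + self.alpha * len(self.vocab)
        scored = {d: (counts[d] + self.alpha) / total for d in self.vocab}
        return sorted(scored, key=scored.get, reverse=True)[:top_n]
```

For example, after fitting on sessions `["a","b","c"]`, `["a","b","c"]`, `["a","b","d"]`, a query with history `["a","b"]` ranks `"c"` above `"d"` because the context `("a","b")` saw `"c"` twice and `"d"` once.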