Information overload is no longer news; the explosive growth of the Internet has made the problem increasingly serious for Web users, who are often overwhelmed by the sheer volume of information and struggle to find the most relevant content at the right time. Recommender systems aim to prune this information space and direct users toward the items that best match their needs and interests. Web recommendation has been an active application area in Web mining and machine learning research. In this paper we propose a novel machine learning perspective on the problem, based on reinforcement learning. Unlike other recommender systems, ours does not rely on static patterns discovered from Web usage data; instead, it learns to make recommendations as actions performed in each situation. We model the problem with Q-learning while employing concepts and techniques commonly applied in the Web usage mining domain. We argue that the reinforcement learning paradigm provides an appropriate model for the recommendation problem, as well as a framework in which the system constantly interacts with the user and learns from her behavior. Our experimental evaluations support these claims and demonstrate how this approach can improve the quality of Web recommendations.
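To make the Q-learning formulation concrete, the following is a minimal illustrative sketch, not the paper's actual system: states are reduced to the user's last visited page, actions are candidate pages to recommend, and a toy reward signals whether the user clicked the recommendation. All names, the page set, and the reward model are assumptions made for the example.

```python
import random
from collections import defaultdict

# Hypothetical tabular Q-learning sketch for Web page recommendation.
# State = last visited page; action = page to recommend. The reward
# model below is a toy assumption, not the paper's formulation.

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
PAGES = ["home", "news", "sports", "tech"]

Q = defaultdict(float)  # Q[(state, action)] -> estimated action value

def choose_recommendation(state):
    """Epsilon-greedy: mostly exploit the best-known page, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(PAGES)
    return max(PAGES, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard Q-learning update from one observed user interaction."""
    best_next = max(Q[(next_state, a)] for a in PAGES)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Simulate user sessions: reward 1 if the recommended page is clicked.
random.seed(0)
for _ in range(2000):
    state = random.choice(PAGES)
    action = choose_recommendation(state)
    clicked = (state == "news" and action == "sports")  # toy user preference
    update(state, action, 1.0 if clicked else 0.0, action if clicked else state)

# After training, the greedy recommendation from "news" reflects the
# learned preference rather than a static mined pattern.
print(max(PAGES, key=lambda a: Q[("news", a)]))
```

The point of the sketch is the interaction loop: the system acts, observes the user's response, and adjusts its value estimates online, rather than recommending from patterns mined once offline.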