Incremental approaches learn drifting user interests mainly from user feedback. Most existing approaches assume that the data instances in that feedback are binary labeled. This paper presents a novel probabilistic approach that learns drifting user interests from numerically labeled feedback instead of binary-labeled feedback. The approach models user interests as a set of probabilistic concepts, interprets numerical instance labels as the probabilities that the user likes those instances, and uses feedback to update the user interest model incrementally with an exponential, recency-weighted average algorithm. Experimental results on different learning tasks show that the approach outperforms existing approaches in numerically labeled feedback environments.
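The update rule named in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the step size `alpha`, the function name `update_interest`, and the scalar interest estimate are all assumptions made for illustration; an exponential, recency-weighted average moves the current estimate toward each new numeric label, so recent feedback dominates older feedback.

```python
# Hedged sketch of an exponential, recency-weighted average update.
# The step size `alpha` and the scalar interest estimate are assumptions,
# not details taken from the paper.

def update_interest(estimate: float, label: float, alpha: float = 0.2) -> float:
    """Move the interest probability toward the numeric feedback label.

    `label` is treated as the probability that the user likes the
    instance (a value in [0, 1]). A constant step size `alpha` gives
    exponentially decaying weight to older feedback.
    """
    return estimate + alpha * (label - estimate)

# Incrementally fold in a stream of numerically labeled feedback.
estimate = 0.5
for label in [0.9, 0.8, 0.1, 0.7]:
    estimate = update_interest(estimate, label)
```

Because the labels lie in [0, 1] and the update is a convex combination of the old estimate and the new label, the estimate itself always stays in [0, 1], which is consistent with reading it as a probability.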