This paper proposes a novel method for assessing the performance of any Web recommendation function (i.e., user model), $M$, used in a Web recommender system, based on an off-line computation over labeled session data. Each labeled session consists of a sequence of Web pages followed by a page $p^{(IC)}$ that contains information the user claims is relevant. We then apply $M$ to the session to produce a corresponding suggested page $p^{(S)}$. In general, we say that $M$ is good if $p^{(S)}$ has content ``similar'' to the $p^{(IC)}$ associated with the same session. This paper defines a number of functions for estimating this $p^{(S)}$-to-$p^{(IC)}$ similarity that can be used to evaluate any new model off-line, and provides empirical data demonstrating that evaluations based on these similarity functions match our intuitions.
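The off-line evaluation loop described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the `Session` record, the cosine similarity over bag-of-words vectors, and the names `evaluate_model` and `model` are all assumptions introduced here; the paper defines its own family of similarity functions.

```python
# Hedged sketch of the off-line evaluation scheme: for each labeled
# session, the model M maps the page sequence to a suggested page
# p^(S), which is scored against the session's IC page p^(IC) by a
# similarity function; the model's score is the mean over sessions.
# All names here are illustrative, not the paper's API.
from dataclasses import dataclass
from typing import Callable, Dict, List
import math

# Pages are represented here as term-weight vectors (an assumption;
# any content representation with a similarity measure would do).
PageVec = Dict[str, float]

@dataclass
class Session:
    pages: List[PageVec]   # sequence of pages the user visited
    ic_page: PageVec       # p^(IC): page the user claims is relevant

def cosine_similarity(a: PageVec, b: PageVec) -> float:
    """One possible p^(S)-to-p^(IC) similarity: cosine over term vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def evaluate_model(model: Callable[[List[PageVec]], PageVec],
                   sessions: List[Session],
                   sim: Callable[[PageVec, PageVec], float] = cosine_similarity
                   ) -> float:
    """Mean similarity between each suggested page and its IC page."""
    scores = [sim(model(s.pages), s.ic_page) for s in sessions]
    return sum(scores) / len(scores)
```

For instance, a trivial baseline model that simply re-suggests the last page of the session can be scored with `evaluate_model(lambda pages: pages[-1], sessions)`; a better model should obtain a higher mean similarity on the same labeled sessions.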