CLOUDCOM '11 Proceedings of the 2011 IEEE Third International Conference on Cloud Computing Technology and Science
Selecting a foundational platform is an important step in developing recommender systems for personal, research, or commercial purposes. This can be done in many different ways: the platform may be developed from the ground up, an existing recommender engine may be contracted (e.g., OracleAS Personalization), code libraries can be adapted, or a platform may be selected and tailored to suit (e.g., LensKit, MyMediaLite, Apache Mahout). In some cases, a combination of these approaches will be employed. For E-commerce projects, and particularly in the E-commerce website context, the ideal situation is to find an open-source platform with many active contributors that provides a rich and varied set of recommender system functions and meets all or most of the baseline development requirements. Short of this ideal, minor customization of an existing system may be the best approach to meeting specific development requirements. Libraries supporting recommender system development have been available for some time, but only relatively recently have larger-scale, open-source platforms become readily available. In the context of such platforms, evaluation tools are important both to verify and validate baseline platform functionality and to support testing of new techniques and approaches developed on top of the platform. We have employed Apache Mahout as an enabling platform for research and have faced both of these issues in our work on collaborative filtering recommenders.
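The collaborative filtering functionality such platforms provide can be sketched in a few lines. The following is a minimal, illustrative user-based collaborative filtering example: the ratings data and function names are hypothetical, not Mahout's API, though Mahout's Taste recommenders are built from analogous pieces (a data model, a user similarity, and a neighborhood-weighted predictor).

```python
import math

# Hypothetical ratings data: user -> {item: rating}
ratings = {
    "alice": {"a": 5.0, "b": 3.0, "c": 4.0},
    "bob":   {"a": 4.0, "b": 3.0, "d": 5.0},
    "carol": {"b": 2.0, "c": 5.0, "d": 4.0},
}

def cosine(u, v):
    """Cosine similarity computed over the items both users have rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    num = sum(u[i] * v[i] for i in common)
    den = (math.sqrt(sum(u[i] ** 2 for i in common))
           * math.sqrt(sum(v[i] ** 2 for i in common)))
    return num / den if den else 0.0

def predict(user, item):
    """Similarity-weighted average of other users' ratings for the item."""
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or item not in r:
            continue
        s = cosine(ratings[user], r)
        num += s * r[item]
        den += abs(s)
    return num / den if den else None

print(predict("alice", "d"))
```

An evaluation harness of the kind the abstract alludes to would hold out known ratings, call a predictor like `predict`, and score the results (e.g., by mean absolute error), which is also how baseline platform functionality can be verified.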