Collaborative filtering (CF) recommenders suffer from several shortcomings, including centralized processing, vulnerability to shilling attacks, and, most importantly, lack of privacy. To overcome these obstacles, researchers have proposed exploiting interpersonal trust between users to alleviate many of these crucial weaknesses. Until now, attention has mainly been paid to the strengths of trust-aware recommenders, such as alleviating profile sparsity or improving computational efficiency, while little attention has been given to privacy: the disclosure of individual ratings and, most importantly, the protection of trust computation across the social networks that form the backbone of these systems. To help address the problem of privacy in trust-aware recommenders, in this paper we first introduce a framework for privacy-preserving trust-aware recommendation generation. While the trust mechanism aims to increase the recommender's accuracy, preserving privacy requires decreasing it. Since privacy and accuracy are conflicting goals in this context, we show that a Pareto set can be found as an optimal setting for both the privacy-preserving and trust-enabling mechanisms. We show that this Pareto set, when used as the configuration for measuring the accuracy of the underlying collaborative filtering engine, yields an optimized trade-off between the conflicting goals of privacy and accuracy. We demonstrate this concept, along with the applicability of our framework, through experiments on accuracy and privacy factors, and we show experimentally how such an optimal set can be inferred.
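The Pareto-set idea sketched in the abstract can be illustrated by scoring each candidate system configuration on two higher-is-better objectives (privacy, accuracy) and keeping only the non-dominated configurations. The following minimal sketch uses illustrative configuration names and scores that are assumptions for demonstration, not values or an algorithm from the paper:

```python
def pareto_front(scores):
    """scores: dict mapping a configuration label to a (privacy, accuracy)
    pair, both on a higher-is-better scale.  Returns the labels of all
    configurations not dominated by any other configuration, i.e. the
    Pareto set over the two conflicting objectives."""
    front = []
    for cfg, (p, a) in scores.items():
        dominated = any(
            p2 >= p and a2 >= a and (p2 > p or a2 > a)
            for other, (p2, a2) in scores.items()
            if other != cfg
        )
        if not dominated:
            front.append(cfg)
    return front

# Hypothetical measurements: privacy could come from the obfuscation noise
# level, accuracy from e.g. 1 - normalized MAE of the CF engine.  All
# numbers below are illustrative only.
scores = {
    "low_noise":  (0.30, 0.90),
    "mid_noise":  (0.60, 0.75),
    "high_noise": (0.90, 0.50),
    "bad_config": (0.40, 0.40),  # dominated by mid_noise on both objectives
}
```

Here `pareto_front(scores)` returns the three noise settings and excludes `bad_config`; the system designer would then pick an operating point from that front according to how much accuracy loss is acceptable for a given privacy gain.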