It has been shown in recent years that effective profile injection, or shilling, attacks can be mounted against standard recommendation algorithms. These attacks insert bogus user profiles into the system database in order to manipulate the recommendation output, for example to promote or demote the predicted ratings of a particular product. A number of attack models have been proposed, and some detection strategies for identifying these attacks have been empirically evaluated. In this paper we show that the standard attack models can be readily detected using statistical detection techniques. We argue that past research has given insufficient consideration to the effectiveness of attacks under a constraint of statistical invariance: it is in fact possible to create effective attacks that are undetectable by the detection strategies proposed to date, including the PCA-based clustering strategy, which has shown excellent performance against standard attacks. Nevertheless, even these more advanced attacks can be detected by a carefully designed statistical detector. The question posed for future research is whether attack models exist that can produce effective attack profiles that are statistically identical to genuine profiles.
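To illustrate why standard attack models are readily detected statistically, the following minimal sketch simulates an "average attack"-style injection (filler items rated near the global mean, one target item pushed to the maximum) and flags profiles whose rating variance is an outlier on the low side of the population. All names, sizes, and thresholds here are illustrative assumptions, not the detector studied in the paper; the point is only that a naive attack shape leaves a detectable statistical signature.

```python
import random
import statistics

random.seed(0)

N_GENUINE, N_ATTACK, N_ITEMS = 200, 20, 50
TARGET = 0  # item the (hypothetical) attacker wants to promote

def clip(x):
    """Keep a rating on the usual 1-5 scale."""
    return max(1.0, min(5.0, x))

# Per-item "popularity" means for the genuine population (assumed model).
item_means = [random.uniform(2.0, 4.0) for _ in range(N_ITEMS)]

# Genuine profiles: noisy ratings scattered around each item's mean.
genuine = [[clip(random.gauss(m, 1.0)) for m in item_means]
           for _ in range(N_GENUINE)]

# Injected profiles with an average-attack shape: the target item is
# pushed to the maximum, filler items hug the global mean rating.
global_mean = statistics.mean(item_means)
attack = []
for _ in range(N_ATTACK):
    p = [clip(random.gauss(global_mean, 0.3)) for _ in range(N_ITEMS)]
    p[TARGET] = 5.0
    attack.append(p)

profiles = genuine + attack

# Statistical detector: attack fillers are too uniform, so per-profile
# rating variance sits well below the population norm. Flag profiles
# whose variance z-score falls under a fixed threshold.
variances = [statistics.pvariance(p) for p in profiles]
mu = statistics.mean(variances)
sigma = statistics.pstdev(variances)
flagged = [i for i, v in enumerate(variances) if (v - mu) / sigma < -2.0]
```

A statistically savvy attacker would instead draw filler ratings from the genuine rating distribution itself, which is exactly the "statistical invariance" constraint discussed above: such profiles would pass this variance test, pushing detection toward subtler statistics or, in the limit, making detection impossible.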