Toward trustworthy recommender systems: An analysis of attack models and algorithm robustness

  • Authors: Bamshad Mobasher, Robin Burke, Runa Bhaumik, Chad Williams
  • Affiliation: DePaul University, Chicago, IL
  • Venue: ACM Transactions on Internet Technology (TOIT)
  • Year: 2007

Abstract

Publicly accessible adaptive systems such as collaborative recommender systems present a security problem. Attackers, who cannot be readily distinguished from ordinary users, may inject biased profiles in an attempt to force a system to “adapt” in a manner advantageous to them. Such attacks may lead to a degradation of user trust in the objectivity and accuracy of the system. Recent research has begun to examine the vulnerabilities and robustness of different collaborative recommendation techniques in the face of “profile injection” attacks. In this article, we outline some of the major issues in building secure recommender systems, concentrating in particular on the modeling of attacks and their impact on various recommendation algorithms. We introduce several new attack models and perform extensive simulation-based evaluations to show which attacks are most successful and practical against common recommendation techniques. Our study shows that both user-based and item-based algorithms are highly vulnerable to specific attack models, but that hybrid algorithms may provide a higher degree of robustness. Using our formal characterization of attack models, we also introduce a novel classification-based approach for detecting attack profiles and evaluate its effectiveness in neutralizing attacks.
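
For readers unfamiliar with profile injection, the sketch below illustrates the general shape of a "push" attack profile of the kind studied in this literature: the target item is rated at the maximum, surrounded by filler ratings chosen to blend in with genuine users (here, sampled around each item's observed mean rating, in the style of an average attack). All names and parameters are illustrative assumptions for exposition; this is not the authors' implementation.

```python
import random

def build_attack_profile(target_item, filler_items, item_means,
                         filler_size=50, r_max=5):
    """Construct one injected 'push' profile (illustrative sketch).

    target_item  -- item the attacker wants promoted
    filler_items -- candidate items used to camouflage the profile
    item_means   -- attacker's estimate of each item's mean rating
    filler_size  -- number of filler items rated (attack-size knob)
    r_max        -- maximum rating on the system's scale
    """
    # Push attack: rate the target at the maximum value.
    profile = {target_item: r_max}
    # Average-attack-style filler: rate sampled items near their mean,
    # so the profile correlates well with many genuine users and is
    # likely to be selected as a neighbor.
    for item in random.sample(filler_items, filler_size):
        noisy_mean = random.gauss(item_means[item], 0.5)
        profile[item] = min(max(round(noisy_mean), 1), r_max)
    return profile

# Example: inject a batch of such profiles to push a single item.
# profiles = [build_attack_profile(42, catalog, means) for _ in range(100)]
```

Injecting many such profiles biases neighborhood formation in memory-based collaborative filtering toward the attacker's ratings; quantifying that bias across attack models and algorithms is the focus of the article.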