User-controllable learning of security and privacy policies

  • Authors:
  • Patrick Gage Kelley; Paul Hankes Drielsma; Norman Sadeh; Lorrie Faith Cranor

  • Affiliations:
  • Carnegie Mellon University, Pittsburgh, PA, USA (all authors)

  • Venue:
  • Proceedings of the 1st ACM Workshop on AISec (AISec '08)
  • Year:
  • 2008


Abstract

Studies have shown that users have great difficulty specifying their security and privacy policies in a variety of application domains. While machine learning techniques have successfully been used to refine models of user preferences, such as in recommender systems, they are generally configured as "black boxes" that take control over the entire policy and severely restrict the ways in which the user can manipulate it. This article presents an alternative approach, referred to as user-controllable policy learning. It involves the incremental manipulation of policies in a context where the system and the user refine a common policy model. The user regularly provides feedback on decisions made based on the current policy. This feedback is used to identify (learn) incremental policy improvements, which are presented as suggestions to the user. The user, in turn, can review these suggestions and decide which, if any, to accept. The incremental nature of the suggestions enhances usability, and because the user and the system manipulate a common policy representation, the user retains control and can still make policy modifications by hand. Results obtained using a neighborhood search implementation of this approach are presented in the context of data derived from the deployment of a friend finder application, where users can share their locations with others, subject to privacy policies they refine over time. We present results showing that policy accuracy, which averages 60% when users initially define their policies, climbs as high as 90% using our technique.
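The feedback-driven loop described above can be sketched in code. The following is a minimal, hypothetical illustration of neighborhood search over a rule-based location-sharing policy; the `Rule` structure, the one-hour window edits, and the feedback format are all illustrative assumptions, not the authors' actual implementation:

```python
# Hypothetical sketch of user-controllable policy learning via neighborhood
# search. A policy is a set of time-window rules; audited feedback records
# whether the user wanted each past request allowed. The system searches
# single-edit neighbors of the current policy and proposes the best one as
# a suggestion the user may accept or reject.

from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass(frozen=True)
class Rule:
    """Allow a requester group to see the user's location in [start, end) hours."""
    group: str
    start: int  # hour, 0-23
    end: int

def decide(policy: List[Rule], group: str, hour: int) -> bool:
    """A request is allowed if any rule in the policy covers it."""
    return any(r.group == group and r.start <= hour < r.end for r in policy)

def accuracy(policy: List[Rule],
             feedback: List[Tuple[str, int, bool]]) -> float:
    """Fraction of audited requests where the policy's decision matched the
    user's stated preference. feedback items are (group, hour, wanted_allow)."""
    correct = sum(decide(policy, g, h) == ok for g, h, ok in feedback)
    return correct / len(feedback)

def neighbors(policy: List[Rule]):
    """Incremental, human-readable edits: widen or narrow one rule's time
    window by a single hour. Each neighbor differs from the current policy
    by one small change, so the user can review it as a suggestion."""
    for i, r in enumerate(policy):
        for ds, de in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            s, e = r.start + ds, r.end + de
            if 0 <= s < e <= 24:
                yield policy[:i] + [Rule(r.group, s, e)] + policy[i + 1:]

def suggest(policy: List[Rule],
            feedback: List[Tuple[str, int, bool]]) -> Optional[List[Rule]]:
    """Return the single-edit neighbor with the highest accuracy on the
    feedback, or None if no edit improves on the current policy. The user,
    not the system, decides whether to apply the suggestion."""
    best, best_acc = None, accuracy(policy, feedback)
    for cand in neighbors(policy):
        acc = accuracy(cand, feedback)
        if acc > best_acc:
            best, best_acc = cand, acc
    return best
```

For example, if the current policy allows "friends" only from 9:00 to 12:00 but the audit log shows the user also approved a 12:00 request, `suggest` proposes widening that rule to end at 13:00, a change small enough for the user to verify by hand before accepting.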