Dynamic security policy learning

  • Authors:
  • Yow Tzu Lim; Pau-Chen Cheng; Pankaj Rohatgi; John A. Clark

  • Affiliations:
  • University of York, York, United Kingdom; IBM Research, Hawthorne, NY, USA; Cryptography Research, San Francisco, CA, USA; University of York, York, United Kingdom

  • Venue:
  • Proceedings of the first ACM workshop on Information security governance
  • Year:
  • 2009

Abstract

Recent research [12] has suggested that traditional top-down security policy models are too rigid to cope with changes in dynamic operational environments. There is a need for greater flexibility in security policies, so that information is protected appropriately while operational needs are still satisfied. Previous work has shown that security policies can be learnt from examples using machine learning techniques: given a set of criteria of concern, one can apply these techniques to learn the policy that best fits the criteria. These criteria can be expressed in terms of high-level objectives, or characterised by the set of previously seen decision examples. We argue here that even if an optimal policy can be learnt automatically, it will become sub-optimal over time as operational requirements change, so the policy needs to be updated continually to maintain its optimality. This paper proposes two dynamic security policy learning frameworks.
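
The abstract describes learning an access-control policy from previously seen decision examples and then re-learning it as requirements drift. As a minimal illustrative sketch only, and not the authors' two frameworks, the Python snippet below trains a decision-tree allow/deny policy on labelled request examples and re-learns it over a sliding window of recent decisions; the feature encoding, window size, and function names are assumptions made for illustration.

```python
# Illustrative sketch (assumed design, not the paper's frameworks): learn an
# allow/deny policy from labelled decision examples, re-learning on a sliding
# window of recent examples so the policy tracks changing requirements.
from collections import deque
from sklearn.tree import DecisionTreeClassifier

WINDOW = 500  # assumed window size: keep only the most recent 500 examples

window = deque(maxlen=WINDOW)          # (request features, decision) pairs
policy = DecisionTreeClassifier(max_depth=4)

def observe(features, decision):
    """Record a decision example and re-learn the policy over the window."""
    window.append((features, decision))
    X = [f for f, _ in window]
    y = [d for _, d in window]
    policy.fit(X, y)                   # batch re-learning over recent history

def decide(features):
    """Apply the currently learnt policy to a new access request."""
    return policy.predict([features])[0]

# Hypothetical request features: (subject clearance, object sensitivity, hour)
observe((3, 2, 9), "allow")
observe((1, 5, 23), "deny")
print(decide((2, 3, 10)))              # the policy learnt so far decides
```

A sliding window is one simple way to keep a learnt policy current: stale examples fall out of the window, so decisions that no longer reflect operational requirements stop influencing the policy.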