A reinforcement model for collaborative security and its formal analysis

  • Authors:
  • Janardan Misra (HTS Research, Bangalore, India)
  • Indranil Saha (University of California, Los Angeles, CA)

  • Venue:
  • NSPW '09: Proceedings of the 2009 New Security Paradigms Workshop
  • Year:
  • 2009


Abstract

This paper presents a principled approach to a little-studied aspect of computer security that relates to human behavior. The literature has long emphasized the advantages of involving users who, while not primarily responsible for security, often have strong analytic ability to detect violations and threats. In this work we propose a reinforcement framework that enables collaborative monitoring of policy violations by users. We define a payoff model to formalize the framework: it stipulates appropriate payoffs, as reward, punishment, and community price, for reporting genuine or false violations, for failing to report detected violations, and for proactively reporting vulnerabilities and threats. We define a probabilistic robustness property of the resulting system and constraints for the economic feasibility of the payoffs. To estimate the parameters of the payoff model, system and user behaviors are modeled as probabilistic finite state machines (PFSMs), and the likelihood of the model's success is specified in Probabilistic Computation Tree Logic (PCTL). Automated quantitative analysis with the PRISM model checker then drives the estimation of the model's parameters from the PFSMs and PCTL formulas.
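To illustrate the shape of such a payoff model, the sketch below assigns payoffs to reporting events and accumulates them per user and per community. The event names, payoff magnitudes, and function signatures are hypothetical assumptions for illustration only; they are not the parameters from the paper, which are instead estimated via PFSMs and PCTL model checking.

```python
# Hypothetical sketch of a reinforcement payoff model for collaborative
# monitoring. Event names and payoff values are illustrative assumptions,
# not the paper's actual parameters.

PAYOFFS = {
    "report_genuine_violation": 10,   # reward for reporting a genuine violation
    "report_false_violation":   -5,   # punishment for a false report
    "withhold_detected":        -8,   # punishment for not reporting a detected violation
    "report_vulnerability":     15,   # reward for proactively reporting a vulnerability
}

COMMUNITY_PRICE = -2  # price levied on each user when a violation goes unreported


def user_payoff(events):
    """Total payoff for one user over a sequence of reporting events."""
    return sum(PAYOFFS[e] for e in events)


def community_payoff(num_users, unreported_violations):
    """Aggregate community price when violations go unreported."""
    return COMMUNITY_PRICE * num_users * unreported_violations
```

In the paper's framework, values like these would not be chosen by hand: the economic-feasibility constraints and the probabilistic robustness property, checked against the PFSM models in PRISM, constrain which payoff parameters are acceptable.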