Lagrange multipliers and maximum information leakage in different observational models

  • Authors:
  • Pasquale Malacaria; Han Chen

  • Affiliations:
  • Queen Mary University of London, London, United Kingdom; Queen Mary University of London, London, United Kingdom

  • Venue:
  • Proceedings of the Third ACM SIGPLAN Workshop on Programming Languages and Analysis for Security (PLAS)
  • Year:
  • 2008

Abstract

This paper explores two fundamental issues in language-based security. The first is to provide a quantitative definition of information leakage that is valid across several attacker models. We consider attackers with different capabilities: the strongest is able to observe the values of the low variables at each step during the execution of a program, while the weakest can only observe a single low value at some stage of the execution. We will provide a uniform definition of leakage, based on Information Theory, that allows us to formalize and prove some intuitive relationships between the amounts leaked by the same program in the different models. The second issue is channel capacity, which in security terms amounts to answering the questions: given a program and an observational model, what is the maximum amount the program can leak, and which input distribution causes that maximum leakage? To answer these questions we will introduce techniques from constrained non-linear optimization, mainly Lagrange multipliers, and we will show how they provide a workable solution in all of the observational models considered. In the simplest setting, i.e. under minimal constraints, we will show that channel capacity is achieved by any input distribution which induces a uniform distribution on the observables.
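
As a rough illustration of the ideas in the abstract (not code from the paper), the Python sketch below computes the leakage of a small deterministic program as the entropy of the distribution it induces on the observables, and numerically maximizes that leakage over input distributions with scipy's SLSQP solver, a Lagrangian (KKT-based) method. The example program parity, and all other names in the sketch, are illustrative assumptions.

    import numpy as np
    from scipy.optimize import minimize

    def parity(h):
        # Hypothetical program: leaks only the parity of the 3-bit secret h.
        return h % 2

    HIGH = list(range(8))                    # secret inputs 0..7
    OBS = sorted({parity(h) for h in HIGH})  # observable outputs {0, 1}

    def leakage(p):
        # For a deterministic program, leakage I(H; O) equals H(O), the Shannon
        # entropy of the distribution that the input distribution p induces on OBS.
        q = np.zeros(len(OBS))
        for h, ph in zip(HIGH, p):
            q[OBS.index(parity(h))] += ph
        q = q[q > 0]
        return float(-np.sum(q * np.log2(q)))

    # Maximise leakage over input distributions; the simplex constraint is
    # enforced by SLSQP via Lagrange multipliers / KKT conditions.
    p0 = np.full(len(HIGH), 1.0 / len(HIGH))
    res = minimize(lambda p: -leakage(p), p0,
                   constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1}],
                   bounds=[(0, 1)] * len(HIGH), method="SLSQP")

    print("channel capacity (bits):", -res.fun)  # about 1.0 = log2(|OBS|)

Running the sketch reports a capacity of about 1 bit, i.e. log2 of the number of observables, attained by any input distribution that makes the induced distribution on the observables uniform, in line with the abstract's final claim.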