On bounding problems of quantitative information flow
Journal of Computer Security - ESORICS 2010
Researchers have proposed formal definitions of quantitative information flow based on information-theoretic notions such as Shannon entropy, min-entropy, guessing entropy, and channel capacity. This paper investigates the hardness of, and the possibilities for, precisely checking and inferring quantitative information flow according to such definitions. We prove that, even for just comparing two programs to determine which has the larger flow, none of the definitions is a k-safety property for any k, and therefore none is amenable to the self-composition technique that has been successfully applied to precisely checking non-interference. We also show a complexity-theoretic gap with non-interference: for loop-free boolean programs, for which checking non-interference is coNP-complete, the comparison problem is #P-hard under all of the definitions. On the positive side, we show that universally quantifying over the distribution in the comparison problem, that is, comparing two programs according to the entropy-based definitions to determine which has the larger flow for all distributions, is a 2-safety problem in general and is coNP-complete when restricted to loop-free boolean programs. We prove this by showing that the problem is equivalent to a simple relation that naturally expresses the fact that one program is more secure than the other. We also prove that this relation refines the channel-capacity-based definition, and that it can be precisely checked via self-composition as well as via the "interleaved" self-composition technique.
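To make the self-composition idea in the abstract concrete, the following is a minimal sketch, not the paper's own construction: for a program P(high, low), non-interference is the 2-safety property that two runs sharing the same low input but possibly differing on the high input produce the same observable output. The sketch brute-forces this check for a small loop-free boolean program represented as a Python function over bit-vectors; the names P, noninterferent, and the bit widths are hypothetical, and the paper instead checks the composed program symbolically rather than by enumeration.

    from itertools import product

    def noninterferent(P, high_bits, low_bits):
        # Self-composition: pair each run of P with a second run P' that
        # shares the low input but may differ on the high input, and check
        # the 2-safety assertion that the observable outputs agree.
        for low in product([False, True], repeat=low_bits):
            for h1 in product([False, True], repeat=high_bits):
                for h2 in product([False, True], repeat=high_bits):
                    if P(h1, low) != P(h2, low):
                        return False  # high input influences the output: a leak
        return True

    # Hypothetical one-bit examples: 'leaky' copies the secret to the
    # output, 'safe' ignores it.
    leaky = lambda h, l: h[0]
    safe = lambda h, l: l[0]
    print(noninterferent(leaky, 1, 1))  # False
    print(noninterferent(safe, 1, 1))   # True

The quantitative comparison problems studied in the paper are precisely the ones this pattern cannot capture: since none of the entropy-based definitions is k-safety for any k, no fixed number of paired runs suffices, which is what motivates the distribution-quantified 2-safety formulation above.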