We study the problem of concealing the functionality of a proprietary or private module when provenance information is shown over repeated executions of a workflow that contains both public and private modules. Our approach is to use provenance views to hide carefully chosen subsets of data over all executions of the workflow so as to ensure Γ-privacy: for each private module and each input x, the module's output f(x) is indistinguishable from Γ−1 other possible values given the visible data in the workflow executions. We show that Γ-privacy cannot be achieved simply by combining solutions for individual private modules; data hiding must also be propagated through public modules. We then examine how much additional data must be hidden and when it is safe to stop propagating data hiding. The answer depends strongly on the workflow topology as well as on the behavior of the public modules on the visible data. In particular, for a class of workflows (which includes the common tree and chain workflows), taking private solutions for each private module, augmented with a public closure that is upstream-downstream safe, ensures Γ-privacy. We define these notions formally and show that the restrictions are necessary. We also study the related optimization problem of minimizing the amount of hidden data.
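To make the Γ-privacy condition concrete, the following is a minimal sketch for a single standalone module over a finite, fully enumerated relation. It does not implement the paper's full workflow-view formalism (no public modules or propagation); the function name `gamma_private` and the representation of the view (a set of hidden output-attribute positions) are illustrative assumptions.

```python
from itertools import product

def gamma_private(f_table, hidden, gamma):
    """Simplified standalone Gamma-privacy check: for every input x,
    at least `gamma` output tuples must be indistinguishable from f(x)
    once the output attributes in `hidden` are masked in the view.

    f_table: dict mapping input tuples to output tuples (the module's
             full relation; illustrative assumption, not the paper's model).
    hidden:  set of output attribute positions hidden by the view.
    gamma:   required number of indistinguishable output values.
    """
    arity = len(next(iter(f_table.values())))
    # Per-attribute value domains, taken from the observed outputs.
    domain = [sorted({y[i] for y in f_table.values()}) for i in range(arity)]

    for x, y in f_table.items():
        # Candidate outputs: agree with f(x) on every visible attribute;
        # hidden attributes may take any value from their column domain.
        candidates = {
            cand
            for cand in product(*domain)
            if all(cand[i] == y[i] for i in range(arity) if i not in hidden)
        }
        if len(candidates) < gamma:
            return False
    return True

# Toy private module: 2-bit input -> (XOR, AND) output.
f = {(a, b): (a ^ b, a & b) for a in (0, 1) for b in (0, 1)}
```

Hiding one binary output attribute leaves two indistinguishable outputs per input, so the toy module is 2-private but not 3-private under that view; hiding both attributes makes it 4-private.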