Statistical analysis with missing data
Microdata Protection through Noise Addition
Inference Control in Statistical Databases: From Theory to Practice
Adjusting survey weights when altering identifying design variables via synthetic data
PSD'06 Proceedings of the 2006 CENEX-SDC project international conference on Privacy in Statistical Databases
Random Forests for Generating Partially Synthetic, Categorical Data
Transactions on Data Privacy
Using support vector machines for generating synthetic datasets
PSD'10 Proceedings of the 2010 international conference on Privacy in statistical databases
Synthetic data for small area estimation
PSD'10 Proceedings of the 2010 international conference on Privacy in statistical databases
Disclosure risk of synthetic population data with application in the case of EU-SILC
PSD'10 Proceedings of the 2010 international conference on Privacy in statistical databases
Transactions on Data Privacy
For datasets considered for public release, statistical agencies face a dilemma: guaranteeing the confidentiality of survey respondents on the one hand and offering sufficiently detailed data for scientific use on the other. A variety of methods that address this problem can be found in the literature. In this paper we discuss the advantages and disadvantages of two approaches that provide disclosure control by generating synthetic datasets: the first, proposed by Rubin [1], generates fully synthetic datasets, while the second, suggested by Little [2], imputes values only for selected variables that bear a high risk of disclosure. Changing only some variables will in general lead to higher analytical validity. However, the disclosure risk also increases for partially synthetic data, since true values remain in the datasets. Thus, agencies willing to release synthetic datasets have to decide which of the two methods best balances the trade-off between data utility and disclosure risk for their data. We offer some guidelines to help make this decision. To our knowledge, the two approaches have never been empirically compared in the literature. We apply both methods to a set of variables from the 1997 wave of the German IAB Establishment Panel and evaluate their quality by comparing results from the original data with results obtained from the same analyses run on the imputed datasets. The results are as expected: in both cases the analytical validity of the synthetic data is high, with partially synthetic datasets outperforming fully synthetic datasets in terms of data utility. But this advantage comes at the price of a higher disclosure risk for the partially synthetic data.
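To illustrate the partially synthetic approach described above, the following is a minimal sketch (not the authors' actual imputation procedure, which uses more elaborate models): only a designated sensitive column is replaced with draws from a simple linear model fitted on the remaining columns, so that non-sensitive values stay untouched while the joint relationship between variables is approximately preserved. The function name `partially_synthetic` and the toy two-column dataset are illustrative assumptions.

```python
import numpy as np

def partially_synthetic(data, sensitive_cols, rng):
    """Replace only the sensitive columns of `data` with draws from a
    linear model fitted on the non-sensitive columns (illustrative sketch)."""
    synth = data.copy()
    other = [c for c in range(data.shape[1]) if c not in sensitive_cols]
    # design matrix: intercept plus the non-sensitive columns
    X = np.column_stack([np.ones(len(data)), data[:, other]])
    for c in sensitive_cols:
        y = data[:, c]
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        sigma = resid.std(ddof=X.shape[1])
        # draw synthetic values from the fitted predictive distribution
        synth[:, c] = X @ beta + rng.normal(0.0, sigma, size=len(y))
    return synth

# Toy example: x is released as-is, y is treated as disclosive.
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(scale=0.5, size=n)
data = np.column_stack([x, y])

synth = partially_synthetic(data, sensitive_cols=[1], rng=rng)
```

The trade-off discussed in the paper is visible even in this toy: the unchanged column keeps its true values (a disclosure risk), while analyses such as the regression of y on x yield nearly the same estimates on the synthetic data (high utility).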