With the proliferation of inexpensive video surveillance and face recognition technologies, it is increasingly possible to track and match people as they move through public spaces. To protect the privacy of subjects visible in video sequences, prior research suggests ad hoc obfuscation methods, such as blurring or pixelation of the face. However, there has been little investigation into how obfuscation affects the usability of the resulting images, for example in classification tasks. In this paper, we demonstrate that at high obfuscation levels, ad hoc methods fail to preserve utility for various tasks, whereas at low obfuscation levels, they fail to prevent recognition. To overcome this implied tradeoff between privacy and utility, we introduce a new algorithm, k-Same-Select, a formal privacy protection schema based on k-anonymity that provably protects privacy while preserving data utility. We empirically validate our findings through evaluations on the FERET database, a large real-world dataset of facial images.
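To make the contrast concrete, here is a minimal sketch of the two families of approaches: block-averaging pixelation as the ad hoc obfuscator, and a k-Same-style averaging step as the k-anonymous de-identifier. This is not the paper's implementation; the function names, the use of plain Euclidean distance on raw pixel vectors, and the omission of k-Same-Select's utility-based partitioning (e.g., grouping faces by expression before averaging) are simplifying assumptions.

```python
import numpy as np

def pixelate(img, block=8):
    """Ad hoc obfuscation: replace each block x block patch with its mean.

    Small blocks leave the face recognizable; large blocks destroy
    utility -- the privacy/utility tradeoff described in the abstract.
    """
    h, w = img.shape
    out = img.astype(float).copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            out[y:y + block, x:x + block] = out[y:y + block, x:x + block].mean()
    return out

def k_same(faces, k):
    """k-Same-style de-identification (sketch).

    Each face vector is replaced by the average of its k nearest faces
    (including itself), so every released image corresponds to at least
    k originals -- the k-anonymity guarantee the abstract refers to.
    """
    faces = np.asarray(faces, dtype=float)
    out = np.empty_like(faces)
    for i, f in enumerate(faces):
        dists = np.linalg.norm(faces - f, axis=1)  # distance to every face
        nearest = np.argsort(dists)[:k]            # indices of k closest
        out[i] = faces[nearest].mean(axis=0)       # release the average
    return out
```

The key structural difference is that `pixelate` operates on each image in isolation, so its protection degrades gracefully (and insufficiently) with the block size, whereas `k_same` defines privacy over the whole dataset: no released face can be linked to fewer than k source identities.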