On the Privacy Risks of Virtual Keyboards: Automatic Reconstruction of Typed Input from Compromising Reflections

  • Authors:
  • Rahul Raguram, Andrew M. White, Yi Xu, Jan-Michael Frahm, Pierre Georgel, Fabian Monrose

  • Affiliations:
  • University of North Carolina at Chapel Hill (all authors)

  • Venue:
  • IEEE Transactions on Dependable and Secure Computing
  • Year:
  • 2013


Abstract

We investigate the implications of the ubiquity of personal mobile devices and reveal new techniques for compromising the privacy of users typing on virtual keyboards. Specifically, we show that so-called compromising reflections (in, for example, a victim's sunglasses) of a device's screen are sufficient to enable automated reconstruction, from video, of text typed on a virtual keyboard. Through the use of advanced computer vision and machine learning techniques, we are able to operate under extremely realistic threat models, in real-world operating conditions, which are far beyond the range of more traditional OCR-based attacks. In particular, our system does not require expensive and bulky telescopic lenses: rather, we make use of off-the-shelf, handheld video cameras. In addition, we make no limiting assumptions about the motion of the phone or of the camera, nor the typing style of the user, and are able to reconstruct accurate transcripts of recorded input, even when using footage captured in challenging environments (e.g., on a moving bus). To further underscore the extent of this threat, our system is able to achieve accurate results even at very large distances—up to 61 m for direct surveillance, and 12 m for sunglass reflections. We believe these results highlight the importance of adjusting privacy expectations in response to emerging technologies.