A spatiotemporal model of strategies and counter strategies for location privacy protection

  • Authors:
  • Matt Duckham; Lars Kulik; Athol Birtley

  • Affiliations:
  • Department of Geomatics, University of Melbourne, Victoria, Australia; Department of Computer Science and Software Engineering, University of Melbourne, Victoria, Australia; Department of Computer Science and Software Engineering, University of Melbourne, Victoria, Australia

  • Venue:
  • GIScience'06: Proceedings of the 4th International Conference on Geographic Information Science
  • Year:
  • 2006

Abstract

Safeguarding location privacy is becoming a critical issue in location-based services and in location-aware computing generally. Many previous models of location privacy have two drawbacks: (1) they consider only the protection of a person's location privacy, not its invasion by external agents; and (2) they are static, ignoring the spatiotemporal aspects of movement. We argue that, to be complete, any model of location privacy must enable the analysis and identification of techniques both to protect and to invade an individual's location privacy over time. One way to protect an individual's location privacy is to minimize the information revealed about that person's location, a technique termed obfuscation. This paper presents an explicitly spatiotemporal model of location privacy that represents a third party's limited knowledge of a mobile individual's location. We identify two core strategies that a third party can use to refine its knowledge, and so potentially invade that individual's location privacy. A global refinement strategy uses the entire history of knowledge about an agent's location in a single step. A local refinement strategy iteratively constructs refined knowledge over time. We present a formal model of global and local refinement operators, and show how this formal model can be translated into a computational model in a simulation environment.
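
To make the distinction between the two strategies concrete, the Python sketch below represents a third party's knowledge as a set of candidate grid cells and assumes a maximum speed of one cell per time step. The grid representation, the speed bound, and the function names (reachable, local_refine, global_refine) are illustrative assumptions only, not the paper's formal operators: local refinement carries knowledge forward one observation at a time, while the global operator shown here revisits the whole observation history, so later observations can also prune earlier estimates.

def reachable(cells, max_step=1):
    """Cells reachable from any cell in `cells` within one time step
    (Chebyshev distance, i.e. king-move adjacency on a grid)."""
    out = set()
    for (x, y) in cells:
        for dx in range(-max_step, max_step + 1):
            for dy in range(-max_step, max_step + 1):
                out.add((x + dx, y + dy))
    return out

def local_refine(prev_estimate, observation):
    """One local refinement step: intersect the new obfuscated observation
    with the set of cells reachable from the previous refined estimate."""
    return observation & reachable(prev_estimate)

def global_refine(history):
    """A global-style refinement over the entire observation history:
    a forward pass followed by a backward pass, so later observations
    also constrain earlier estimates (reachability is symmetric here)."""
    est = [set(obs) for obs in history]
    for t in range(1, len(est)):                  # forward pass
        est[t] &= reachable(est[t - 1])
    for t in range(len(est) - 2, -1, -1):         # backward pass
        est[t] &= reachable(est[t + 1])
    return est

# Two obfuscated regions revealed at t = 0 and t = 1.
history = [
    {(0, 0), (5, 5)},    # t = 0: two widely separated candidate cells
    {(1, 0), (1, 1)},    # t = 1: cells near the origin
]

# Local refinement: knowledge is only carried forward in time.
estimate = history[0]
for obs in history[1:]:
    estimate = local_refine(estimate, obs)
print("local, final estimate:  ", estimate)       # {(1, 0), (1, 1)}

# Global refinement: the t = 0 candidate (5, 5) is pruned because it is
# unreachable from every cell observed at t = 1.
print("global, per-time estimates:", global_refine(history))

In this toy run both strategies end with the same estimate at t = 1, but the global pass additionally shrinks the t = 0 estimate; whether and when the two kinds of strategy differ is the kind of question the paper's formal operators and simulation environment are designed to address.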