Turn-by-turn directions go social

  • Authors:
  • Thomas Sandholm; Hang Maxime Ung

  • Affiliations:
  • HP Labs, Palo Alto, CA; Ecole Polytechnique, Paris, France

  • Venue:
  • Proceedings of Interacting with Sound Workshop: Exploring Context-Aware, Local and Social Audio Applications
  • Year:
  • 2011

Abstract

In this paper we present the implementation of a system that allows audio-based, real-time coordination of a group of users with mobile devices. Use cases include real-time meeting-point coordination, IM-to-voice communication, and social sports tracking. The assumption is that at least one person in the group can easily enter text from a keyboard-like control, e.g. from a desktop PC or tablet. This person, whom we call the coordinator, can then communicate with one or more people, called operatives, who carry mobile devices and are engaged in an activity that makes it hard or impossible for them to see the device's screen or to use touch-based input mechanisms. Examples include driving, running, biking, and walking. We scope our work to use cases where the operatives never have to provide any explicit input back to the coordinator, apart from automatically detected device properties such as geolocation. A secondary goal is that the only system requirement, for both the coordinator and the operatives, is a browser capable of rendering HTML5 content, allowing coordination across a diverse fleet of devices. The main lesson learned from our work is that audio cues can be very useful in a mobile setting for conveying system information, friend activity, and direct, possibly translated, text-to-speech messages. Experiments show that our infrastructure can handle up to 78 people submitting locations in real time (every 10 seconds) to a coordinator within the same group.
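
Since the only stated client requirement is an HTML5-capable browser, the operative side of such a system can be sketched with standard browser APIs. The TypeScript sketch below is illustrative only, not the paper's implementation: the /group/.../location and /group/.../audio endpoints, the group identifier, and the JSON payload shape are all assumptions. It shows the two operative-side behaviors the abstract describes: reporting geolocation every 10 seconds, and playing back server-synthesized (and possibly translated) speech.

```typescript
// Hypothetical operative client: endpoint paths, group id, and payload
// fields are assumptions for illustration, not from the paper.

const GROUP_ID = "demo-group";       // hypothetical group identifier
const REPORT_INTERVAL_MS = 10_000;   // abstract reports 10-second updates

// Post the device's geolocation to the coordinator every 10 seconds;
// this is the only (implicit) input an operative ever sends back.
function startLocationReports(): void {
  setInterval(() => {
    navigator.geolocation.getCurrentPosition((pos) => {
      void fetch(`/group/${GROUP_ID}/location`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          lat: pos.coords.latitude,
          lon: pos.coords.longitude,
          ts: pos.timestamp,
        }),
      });
    });
  }, REPORT_INTERVAL_MS);
}

// Play a coordinator message eyes-free: the server is assumed to return
// a synthesized speech clip for the given message id.
function playMessage(messageId: string): void {
  const clip = new Audio(
    `/group/${GROUP_ID}/audio/${encodeURIComponent(messageId)}`
  );
  void clip.play();
}

startLocationReports();
```

Keeping speech synthesis and translation on the server and playing the result through an HTML5 Audio element keeps the client within the single stated requirement of an HTML5-capable browser, rather than depending on any platform-specific text-to-speech API.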