The sound of silence

  • Authors: Wai-Tian Tan (Cisco Systems), Mary Baker (Hewlett-Packard Laboratories), Bowon Lee (Hewlett-Packard Laboratories), Ramin Samadani (Qualcomm Technologies, Inc.)
  • Venue: Proceedings of the 11th ACM Conference on Embedded Networked Sensor Systems (SenSys '13)
  • Year: 2013

Abstract

A list of the dynamically changing group membership of a meeting supports a variety of meeting-related activities. Effortless content sharing may be the most important application, but such a list can also provide business card information for attendees, feed calendar applications to simplify scheduling of follow-up meetings, populate the membership of collaborative editing applications, mailing lists, and social networks, and support many other tasks. We have developed a system that uses audio sensing to maintain meeting membership automatically. We choose audio because hearing the same conversation provides a human-centric notion of attending the same gathering: it accounts for walls and other sound barriers between otherwise closely situated people, it can sense participants attending remotely by teleconference, and it requires no explicit action from attendees, even when participants leave a meeting whose associated content they should no longer access. The system works indoors and outdoors and does not require pre-populating databases with mapping information. For sensors, we require only the microphones commonly available on mobile devices. Our system exploits a new technique for matching sensed patterns of relative audio silence, or silence signatures, from mobile devices (mobile phones, tablets, laptops) to determine device co-location. A signature based on simple silence patterns rather than a detailed audio characterization reveals less about the content of potentially private conversations and can also be compared more robustly across devices whose clocks are not synchronized. We evaluate our method in formal indoor meetings and teleconferences, in ad hoc gatherings outdoors, and in a noisy cafeteria. Across all our tests so far, our approach determines audio co-location with a worst-case accuracy of 96%, and recovery from these errors takes only a few seconds. We also describe a content sharing application supported by silence signature matching, the limitations of our approach, current status, and future plans.
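To make the idea concrete, the sketch below shows one plausible form of silence-signature extraction and matching in Python. The frame length, energy threshold, shift search range, and function names are illustrative assumptions, not details taken from the paper; the authors' actual signature and comparison method may differ.

```python
import numpy as np

def silence_signature(samples, rate, frame_ms=100, threshold_db=-40.0):
    """Binarize 16-bit PCM audio into a per-frame silence pattern.

    A frame is marked 1 (silent) when its RMS level, relative to
    full scale, falls below threshold_db. The frame length and
    threshold here are illustrative choices, not the paper's values.
    """
    frame_len = int(rate * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    frames = np.asarray(samples[: n_frames * frame_len], dtype=float)
    frames = frames.reshape(n_frames, frame_len)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    level_db = 20 * np.log10(rms / 32768.0 + 1e-12)  # +1e-12 avoids log(0)
    return (level_db < threshold_db).astype(np.uint8)

def signature_similarity(sig_a, sig_b, max_shift=10):
    """Best fraction of agreeing frames over small time shifts.

    Trying several alignments compensates for device clocks that are
    not synchronized; max_shift bounds the offset searched, in frames.
    """
    best = 0.0
    for shift in range(-max_shift, max_shift + 1):
        a = sig_a[max(shift, 0):]   # positive shift: delay signature A
        b = sig_b[max(-shift, 0):]  # negative shift: delay signature B
        n = min(len(a), len(b))
        if n > 0:
            best = max(best, float(np.mean(a[:n] == b[:n])))
    return best

# Example: declare two devices co-located when their signatures agree
# closely enough (the 0.9 cutoff is also an assumed parameter).
# co_located = signature_similarity(sig1, sig2) > 0.9
```

Searching over small shifts is what lets the comparison tolerate unsynchronized device clocks, echoing the robustness property described in the abstract, while the binary silence pattern discards the speech content itself.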