Some observations on the "live" collaborative tagging of audio conferences in the enterprise

  • Authors:
  • Shreeharsh Kelkar; Ajita John; Doree Duncan Seligmann

  • Affiliations:
  • Avaya Labs Research, Basking Ridge, NJ, USA (all authors)

  • Venue:
  • Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
  • Year:
  • 2010


Abstract

This paper describes preliminary findings from a system for "live" collaborative tagging of enterprise meetings held on an audio bridge between distributed participants. Participants can apply tags to different points of the interaction as it unfolds and can see, in near real time, the "flow" of tags as they are contributed. Two novel types of tags are proposed: "deep tags," which apply to a portion of the interaction, and "instant tags," which apply to a single instant of it. The system is in use by enterprise users, and we analyze a corpus of 737 live-tags collected from 16 conversations over several months. We found that live-tags for audio differ somewhat from Web 2.0 tags: they are longer, and they confer affordances on the audio such as description and summarization. Some observations on the "cognitive cost" of live-tagging are offered.
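The distinction the abstract draws between deep tags (spanning a portion of the audio) and instant tags (marking a single moment) can be illustrated with a minimal data-model sketch. This is purely hypothetical, not the authors' implementation; all names (`LiveTag`, `covers`, the time offsets) are assumptions for illustration only:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LiveTag:
    """A hypothetical model of a tag applied to ongoing conference audio.

    `start` and `end` are offsets in seconds into the audio stream.
    An "instant tag" marks one moment (`end` is None); a "deep tag"
    spans a portion of the interaction (`end` > `start`).
    """
    text: str
    author: str
    start: float
    end: Optional[float] = None

    @property
    def is_instant(self) -> bool:
        # Instant tags carry no span, only a single timestamp.
        return self.end is None

    def covers(self, t: float) -> bool:
        """Return True if this tag applies at time offset t."""
        if self.is_instant:
            return t == self.start
        return self.start <= t <= self.end

# Illustrative usage: one deep tag over a segment, one instant tag.
deep = LiveTag("budget discussion", "participant_a", 120.0, 310.0)
instant = LiveTag("decision made", "participant_b", 305.0)
```

Under this sketch, replaying the "flow" of tags reduces to querying which tags cover a given playback offset.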