VACE Multimodal Meeting Corpus

  • Authors:
  • Lei Chen, R. Travis Rose, Ying Qiao, Irene Kimbara, Fey Parrill, Haleema Welji, Tony Xu Han, Jilin Tu, Zhongqiang Huang, Mary Harper, Francis Quek, Yingen Xiong, David McNeill, Ronald Tuttle, Thomas Huang

  • Affiliations:
  • School of Electrical Engineering, Purdue University, West Lafayette, IN (Lei Chen, Zhongqiang Huang, Mary Harper)
  • CHCI, Department of Computer Science, Virginia Tech, Blacksburg, VA (R. Travis Rose, Ying Qiao, Francis Quek, Yingen Xiong)
  • Department of Psychology, University of Chicago, Chicago, IL (Irene Kimbara, Fey Parrill, Haleema Welji, David McNeill)
  • Beckman Institute, University of Illinois Urbana-Champaign, Urbana, IL (Tony Xu Han, Jilin Tu, Thomas Huang)
  • Air Force Institute of Technology, Dayton, OH (Ronald Tuttle)

  • Venue:
  • MLMI'05: Proceedings of the Second International Conference on Machine Learning for Multimodal Interaction
  • Year:
  • 2005

Abstract

In this paper, we report on the infrastructure we have developed to support our research on multimodal cues for understanding meetings. With our focus on multimodality, we investigate the interaction among speech, gesture, posture, and gaze in meetings. For this purpose, a high-quality multimodal corpus is being produced.