The 2007 AMI(DA) System for Meeting Transcription

  • Authors:
  • Thomas Hain, Lukas Burget, John Dines, Giulia Garau, Martin Karafiat, David van Leeuwen, Mike Lincoln, Vincent Wan

  • Affiliations:
  • Department of Computer Science, University of Sheffield, Sheffield S1 4DP, UK; Faculty of Information Engineering, Brno University of Technology, 612 66 Brno, Czech Republic; IDIAP Research Institute, CH-1920 Martigny, Switzerland; Centre for Speech Technology Research, University of Edinburgh, Edinburgh EH8 9LW, UK; Faculty of Information Engineering, Brno University of Technology, 612 66 Brno, Czech Republic; TNO, 2600 AD Delft, The Netherlands; Centre for Speech Technology Research, University of Edinburgh, Edinburgh EH8 9LW, UK, and IDIAP Research Institute, CH-1920 Martigny, Switzerland; Department of Computer Science, University of Sheffield, Sheffield S1 4DP, UK

  • Venue:
  • Multimodal Technologies for Perception of Humans
  • Year:
  • 2008

Abstract

Meeting transcription is one of the main tasks for large-vocabulary automatic speech recognition (ASR) and is supported by several large international projects in the area. The conversational nature of the speech, the difficult acoustics, and the need for high-quality transcripts for higher-level processing make ASR of meeting recordings an interesting challenge. This paper describes the development and system architecture of the 2007 AMIDA meeting transcription system, the third such system developed in a collaboration of six research sites. Different variants of the system participated in all speech-to-text transcription tasks of the 2007 NIST RT evaluations and showed very competitive performance. The best result was obtained on close-talking microphone data, where a final word error rate of 24.9% was achieved.
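The 24.9% figure is a word error rate (WER), the standard scoring metric in the NIST RT evaluations: the minimum number of word substitutions, insertions, and deletions needed to turn the system output into the reference transcript, divided by the number of reference words. The sketch below shows this computation in Python; the function name and example sentences are illustrative only and are not taken from the AMIDA system or the NIST scoring tools.

def word_error_rate(reference: str, hypothesis: str) -> float:
    # Illustrative WER sketch: Levenshtein distance over words,
    # normalised by the reference length.
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i          # deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j          # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution and one deletion over five reference words -> 0.4 (40% WER).
print(word_error_rate("the meeting starts at noon", "the meeting start at"))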