Performance analysis for lattice-based speech indexing approaches using words and subword units

  • Authors:
  • Yi-Cheng Pan; Lin-Shan Lee

  • Affiliations:
  • MediaTek, Inc., Hsinchu, Taiwan and Graduate Institute of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan; Graduate Institute of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan

  • Venue:
  • IEEE Transactions on Audio, Speech, and Language Processing
  • Year:
  • 2010

Abstract

Lattice-based speech indexing approaches are attractive when spoken segments are short, queries are short, and automatic speech recognition (ASR) accuracy is low, because lattices provide recognition alternatives and therefore tend to compensate for recognition errors. Position-specific posterior lattices (PSPLs) and confusion networks (CNs), two of the most popular lattice-based approaches, both reduce disk space requirements and are more efficient than raw lattices. When PSPLs and CNs are used in a word-based fashion, however, they cannot handle out-of-vocabulary (OOV) or rare-word queries. In this paper, we propose an efficient approach for constructing subword-based PSPLs (S-PSPLs) and CNs (S-CNs) and present a comprehensive performance analysis of PSPL and CN structures using both words and subword units, taking into account basic principles and structures, supported by experimental results on Mandarin Chinese. S-PSPLs and S-CNs are shown to yield significant mean average precision (MAP) improvements over word-based PSPLs and CNs for both OOV and in-vocabulary queries while requiring much less disk space for indexing.
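
For concreteness, the sketch below illustrates the general idea behind PSPL-style n-gram matching, where each position of a spoken segment stores posterior probabilities for alternative units (words or subword units such as Mandarin characters), so that an OOV word query can still be matched through its subword decomposition. This is a minimal, hypothetical Python sketch, not the authors' implementation: the index layout, the interpolation weight `lam`, and the character-level tokenization are assumptions made for illustration only.

```python
# Minimal sketch of n-gram query scoring over a PSPL-style index.
# The index layout, mixing weight `lam`, and subword tokenizer are
# illustrative assumptions, not the paper's formulation.

from collections import defaultdict


def ngram_score(positions, query_units, n):
    """Sum, over start positions, the product of per-slot posteriors for
    each length-n piece of the query (the basic PSPL matching idea)."""
    score = 0.0
    for i in range(len(query_units) - n + 1):
        piece = query_units[i:i + n]
        for start in range(len(positions) - n + 1):
            prod = 1.0
            for k, unit in enumerate(piece):
                prod *= positions[start + k].get(unit, 0.0)
                if prod == 0.0:
                    break
            score += prod
    return score


def rank_segments(index, query_units, max_n=3, lam=0.5):
    """Rank segments by an interpolated 1..max_n-gram PSPL score.

    `lam` geometrically down-weights shorter n-grams; the exact
    weighting scheme here is an assumption."""
    scores = defaultdict(float)
    for seg_id, positions in index.items():
        for n in range(1, min(max_n, len(query_units)) + 1):
            weight = lam ** (max_n - n)
            scores[seg_id] += weight * ngram_score(positions, query_units, n)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)


if __name__ == "__main__":
    # Toy character-level (subword) index for two spoken segments:
    # segment id -> list of positions, each a dict {unit: posterior}.
    index = {
        "seg1": [{"台": 0.7, "臺": 0.2}, {"北": 0.8}, {"市": 0.6, "是": 0.3}],
        "seg2": [{"台": 0.3}, {"中": 0.7}, {"市": 0.5}],
    }
    query = list("台北")  # an OOV word can still match via its characters
    print(rank_segments(index, query))
```

With a word-based index, a query word missing from the ASR vocabulary could never appear in any position slot; indexing subword units sidesteps this, which is the motivation for the S-PSPL and S-CN structures studied in the paper.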