sMFCC: exploiting sparseness in speech for fast acoustic feature extraction on mobile devices -- a feasibility study

  • Authors:
  • Shahriar Nirjon (University of Virginia); Robert Dickerson (University of Virginia); John Stankovic (University of Virginia); Guobin Shen (Microsoft Research Asia, Beijing, China); Xiaofan Jiang (Intel Labs China, Beijing, China)

  • Venue:
  • Proceedings of the 14th Workshop on Mobile Computing Systems and Applications
  • Year:
  • 2013

Abstract

Due to limited processing capability, contemporary smartphones cannot extract frequency-domain acoustic features in real time on the device when the sampling rate is high. We propose a solution to this problem that exploits the sparseness of speech to extract frequency-domain acoustic features on a smartphone in real time, without requiring any support from a remote server, even when the sampling rate is as high as 44.1 kHz. We perform an empirical study to quantify the sparseness of speech recorded on a smartphone and use it to efficiently obtain a highly accurate, sparse approximation of a widely used speech feature, the Mel-Frequency Cepstral Coefficients (MFCC). We name the new feature the sparse MFCC, or sMFCC for short. We experimentally determine the trade-off between the approximation error and the expected speedup of sMFCC. We implement a simple spoken-word recognition application using both MFCC and sMFCC features, show that sMFCC is expected to be up to 5.84 times faster while its accuracy remains within 1.1%--3.9% of that of MFCC, and determine the conditions under which sMFCC runs in real time.
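The paper's exact sMFCC algorithm is not reproduced in this abstract, but the core idea it describes (a sparse approximation of the spectrum feeding the standard MFCC pipeline) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the top-k sparsification rule and every parameter value (`n_fft`, `n_filters`, `keep`, etc.) are our illustrative assumptions.

```python
import numpy as np

def mel_filterbank(n_filters, n_fft, sr):
    """Standard triangular Mel filterbank over the rfft bins."""
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for j in range(left, center):
            fb[i - 1, j] = (j - left) / max(center - left, 1)
        for j in range(center, right):
            fb[i - 1, j] = (right - j) / max(right - center, 1)
    return fb

def dct_ii(x, n_coeffs):
    """DCT-II via an explicit cosine matrix (keeps the sketch numpy-only)."""
    n = len(x)
    k = np.arange(n_coeffs)[:, None]
    j = np.arange(n)[None, :]
    return np.cos(np.pi * k * (2 * j + 1) / (2 * n)) @ x

def smfcc(frame, sr=44100, n_fft=1024, n_filters=26, n_coeffs=13, keep=64):
    """Sparse-approximated MFCC for one windowed frame.

    The sparsification step below (keep the `keep` largest-magnitude
    spectral bins, zero the rest) is an assumed stand-in for the paper's
    sparse approximation; fewer nonzero bins mean less work downstream.
    """
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)), n_fft))
    sparse_spec = spectrum.copy()
    sparse_spec[np.argsort(spectrum)[:-keep]] = 0.0  # zero all but top-k bins
    fb = mel_filterbank(n_filters, n_fft, sr)
    energies = fb @ (sparse_spec ** 2)
    return dct_ii(np.log(energies + 1e-10), n_coeffs)
```

Because speech energy concentrates in relatively few frequency bins, the top-k spectrum can approximate the full one closely while most filterbank multiplications hit zeros, which is the source of the speedup the abstract quantifies.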