Audio classification based on maximum entropy model

  • Authors:
  • Zhe Feng;Yaqian Zhou;Lide Wu;Zongge Li

  • Affiliations:
  • Dept. of Comput. Sci. & Eng., Fudan Univ., Shanghai, China (all authors)

  • Venue:
  • ICME '03 Proceedings of the 2003 International Conference on Multimedia and Expo - Volume 2
  • Year:
  • 2003

Abstract

Audio classification has been investigated for several years and is one of the key components in audio and video applications. In prior work, accuracy under complicated conditions has not been satisfactory, and results depend heavily on the dataset. In this paper, we present a novel audio classification method based on the maximum entropy model. By applying this method to some widely used features, different feature combinations are considered during model training and better performance can be achieved. When evaluated on the TREC 2002 Video Track's speech/music feature extraction task, the method performed well for both speech and music among the participating systems.
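The paper itself does not include code, but the core idea of a maximum-entropy classifier (equivalently, multinomial logistic regression over feature functions) can be sketched in plain NumPy. The synthetic two-cluster data below stands in for audio feature vectors (e.g., frame-level spectral summaries); the actual features, classes, and training setup used by the authors are not shown in the abstract and are assumptions here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data standing in for audio feature vectors:
# two Gaussian clusters playing the roles of "speech" and "music".
n, d = 200, 8
X = np.vstack([
    rng.normal(-1.0, 1.0, size=(n, d)),   # class 0 ("speech"-like)
    rng.normal(+1.0, 1.0, size=(n, d)),   # class 1 ("music"-like)
])
y = np.array([0] * n + [1] * n)

def softmax(z):
    # Numerically stable softmax over the class axis.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Maximum-entropy model: P(y | x) proportional to exp(w_y . x + b_y),
# trained by gradient ascent on the conditional log-likelihood.
K = 2
W = np.zeros((d, K))
b = np.zeros(K)
lr = 0.1
for _ in range(300):
    P = softmax(X @ W + b)                # current P(y | x)
    onehot = np.zeros_like(P)
    onehot[np.arange(len(y)), y] = 1.0
    err = onehot - P                      # empirical minus expected counts
    W += lr * X.T @ err / len(y)
    b += lr * err.mean(axis=0)

pred = (X @ W + b).argmax(axis=1)
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The gradient here is the classic maxent update: the difference between empirical feature expectations and the model's expected feature counts, which is zero exactly at the maximum-likelihood solution.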