Emotion Recognition of Pop Music Based on Maximum Entropy with Priors

  • Authors:
  • Hui He, Bo Chen, Jun Guo

  • Affiliations:
  • School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing, P.R. China 100876

  • Venue:
  • PAKDD '09 Proceedings of the 13th Pacific-Asia Conference on Advances in Knowledge Discovery and Data Mining
  • Year:
  • 2009

Abstract

Efficient and intelligent music retrieval has become a very important topic nowadays. Analysis of lyrics can complement acoustic methods for music retrieval. One basic aspect of music retrieval is music emotion recognition by learning from lyrics. This problem differs from traditional text classification in that more linguistic or semantic information is required for better emotion analysis. We therefore focus on how to extract meaningful features and how to model them for music emotion recognition. First, we investigate the lyrics corpus with respect to Zipf's Law at the word level, and the results roughly obey Zipf's Law. Then, we study three kinds of preprocessing methods and a series of n-grams under the well-known n-gram language model framework to extract more semantic features. Finally, we employ three supervised learning methods, Naïve Bayes, maximum entropy (ME) classification, and support vector machines, to examine classification performance. In addition, we improve ME with Gaussian and Laplace priors to model features for music emotion recognition. Experimental results show that the feature extraction methods improve music emotion recognition accuracy, and ME with priors achieves the best performance.
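
The pipeline summarized in the abstract (word n-gram features from lyrics, then a maximum entropy classifier regularized by a prior) can be approximated with standard tools: a Gaussian prior on the ME weights corresponds to L2 regularization, and a Laplace prior to L1. The sketch below illustrates that correspondence with scikit-learn; it is not the authors' implementation, and the lyric texts, emotion labels, n-gram range, and regularization strength C are placeholder assumptions.

```python
# Minimal sketch (not the paper's code): maximum entropy (logistic regression)
# over word n-gram features extracted from lyrics, with a Gaussian prior
# (L2 penalty) or a Laplace prior (L1 penalty) on the weights.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical corpus and labels standing in for the lyrics dataset.
lyrics = ["placeholder lyric text one", "placeholder lyric text two"]
emotions = ["happy", "sad"]

# Unigram + bigram bag-of-words features, in the spirit of the n-gram framework.
vectorizer = CountVectorizer(ngram_range=(1, 2))

# penalty="l2" ~ Gaussian prior on the ME weights; penalty="l1" ~ Laplace prior.
me_gaussian = LogisticRegression(penalty="l2", C=1.0, solver="lbfgs", max_iter=1000)
me_laplace = LogisticRegression(penalty="l1", C=1.0, solver="liblinear", max_iter=1000)

model = Pipeline([("ngrams", vectorizer), ("me", me_gaussian)])
model.fit(lyrics, emotions)
print(model.predict(["another placeholder lyric"]))
```

Swapping `me_gaussian` for `me_laplace` in the pipeline gives the L1 (Laplace-prior) variant, which tends to produce sparser feature weights.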