One of the most exciting yet challenging endeavors in music research is to develop a computational model that comprehends the affective content of music signals and organizes a music collection according to emotion. In this paper, we propose a novel acoustic emotion Gaussians (AEG) model that defines a proper generative process of emotion perception in music. As a generative model, AEG permits easy and straightforward interpretation of the model learning process. To bridge the acoustic feature space and the music emotion space, a set of latent feature classes, learned from data, is introduced to perform end-to-end semantic mapping between the two spaces. Based on the space of latent feature classes, the AEG model is applicable to both automatic music emotion annotation and emotion-based music retrieval. To give insight into the AEG model, we also illustrate its learning process. A comprehensive performance study on two emotion-annotated music corpora, MER60 and MTurk, demonstrates the superior accuracy of AEG over its predecessors: the AEG model outperforms the state-of-the-art methods in automatic music emotion annotation. Moreover, for the first time, a quantitative evaluation of emotion-based music retrieval is reported.
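The central idea described above, mapping a clip from the acoustic feature space to the emotion space through latent feature classes, can be sketched as follows. This is an illustrative toy sketch, not the authors' implementation: it assumes each latent class carries a learned 2D Gaussian in the valence–arousal (VA) plane, and that the clip's acoustic posterior over the latent classes supplies the mixture weights. The parameter values (`K`, `mus`, `sigmas`) are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 4  # number of latent feature classes (kept small for the demo)

# Hypothetical "learned" parameters: one VA-space Gaussian per latent class.
mus = rng.uniform(-1.0, 1.0, size=(K, 2))   # Gaussian means in VA space
sigmas = np.stack([0.1 * np.eye(2)] * K)    # per-class covariances

def predict_va(acoustic_posterior):
    """Mean and covariance of the emotion mixture sum_k w_k N(mu_k, Sigma_k),
    where w_k is the clip's acoustic posterior over latent class k."""
    w = np.asarray(acoustic_posterior, dtype=float)
    w = w / w.sum()  # normalize the mixture weights
    mean = w @ mus   # mixture mean in VA space
    # Mixture covariance: sum_k w_k (Sigma_k + mu_k mu_k^T) - mean mean^T
    cov = sum(w[k] * (sigmas[k] + np.outer(mus[k], mus[k])) for k in range(K))
    cov -= np.outer(mean, mean)
    return mean, cov

# A clip whose acoustics mostly activate latent class 0:
mean, cov = predict_va([0.7, 0.1, 0.1, 0.1])
print(mean.shape, cov.shape)  # (2,) (2, 2)
```

Annotation then amounts to reading off the predicted VA distribution, while retrieval can rank clips by how well their predicted distributions match a query point or distribution in the emotion space.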