In developing automated systems to recognize the emotional content of music, we face a problem spanning two disparate domains: the space of human emotions and the acoustic signal of music. To address this problem, we must develop models both for data collected from humans describing their perceptions of musical mood and for quantitative features derived from the audio signal. In previous work, we presented a collaborative game, MoodSwings, which records dynamic (per-second) mood ratings from multiple players within the two-dimensional arousal-valence representation of emotion. Using these data, we present a system that links models of acoustic features and human data to estimate the emotional content of music in the arousal-valence space. Furthermore, in keeping with the dynamic nature of musical mood, we demonstrate the potential of this approach to track the emotional changes in a song over time. We investigate the utility of a range of acoustic features based on psychoacoustic and music-theoretic representations of the audio for this application. Finally, a simplified version of our system is re-incorporated into MoodSwings as a simulated partner for single players, providing a potential platform for furthering perceptual studies and modeling of musical mood.
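To make the core idea concrete, the sketch below illustrates one simple way such a mapping could be set up: a least-squares regression from per-second acoustic feature vectors to (arousal, valence) coordinates, whose predictions form a mood trajectory over time. This is a hypothetical toy with synthetic data, not the paper's actual features or learning method.

```python
import numpy as np

# Hypothetical sketch: regress per-second acoustic features onto
# arousal-valence (A-V) coordinates. All features, labels, and weights
# below are synthetic stand-ins, not the system described in the abstract.
rng = np.random.default_rng(0)

n_seconds, n_feats = 120, 20                   # a 2-minute clip, 20 features/second
X = rng.normal(size=(n_seconds, n_feats))      # acoustic feature vector per second
true_W = rng.normal(size=(n_feats, 2))         # maps features -> (arousal, valence)
Y = X @ true_W + 0.1 * rng.normal(size=(n_seconds, 2))   # noisy per-second A-V labels

# Least-squares fit: W_hat = argmin_W ||X W - Y||^2
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Predicted A-V trajectory: one (arousal, valence) point per second,
# which is what enables mood *tracking* over the course of a song.
trajectory = X @ W_hat                         # shape: (120, 2)
```

In the toy above a single linear map is shared across all time steps; time-varying emotion emerges only because the features change from second to second, which mirrors the abstract's framing of tracking mood dynamically rather than assigning one label per song.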