The music we like (i.e., our musical preferences) encodes and communicates key information about ourselves. Depicting such preferences in a condensed and easily understandable way is very appealing, especially considering current trends in social network communication. In this paper we propose a method to automatically generate, from a given set of preferred music tracks, an iconic representation of a user's musical preferences -- the Musical Avatar. Starting from the raw audio signal, we first compute over 60 low-level audio features. Then, by applying pattern recognition methods, we infer a set of semantic descriptors for each track in the collection. Next, we summarize these track-level semantic descriptors into a user profile. Finally, we map this collection-wise description to the visual domain by creating a humanoid cartoony character that represents the user's musical preferences. We performed a proof-of-concept evaluation of the proposed method on 11 subjects, with promising results. The analysis of the users' evaluations shows a clear preference for avatars generated from the proposed semantic descriptors over avatars derived from neutral or randomly generated values. We also found general agreement that the proposed visualization strategy is representative of the users' musical preferences.
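The summarization step described above (collapsing per-track semantic descriptors into a single collection-wise user profile) can be sketched in a few lines. This is an illustrative assumption, not the authors' actual implementation: the descriptor names and the choice of a plain mean as the aggregation function are hypothetical.

```python
# Hypothetical sketch of the profile-summarization step: each track has a
# dict of semantic descriptor values in [0, 1] (names are illustrative),
# and the user profile is taken to be the per-descriptor mean.

def summarize_profile(track_descriptors):
    """Average each semantic descriptor across all tracks in the collection."""
    keys = track_descriptors[0].keys()
    n = len(track_descriptors)
    return {k: sum(t[k] for t in track_descriptors) / n for k in keys}

tracks = [
    {"danceability": 0.8, "acousticness": 0.2, "happiness": 0.7},
    {"danceability": 0.4, "acousticness": 0.6, "happiness": 0.5},
]
profile = summarize_profile(tracks)
# profile: {"danceability": 0.6, "acousticness": 0.4, "happiness": 0.6}
```

The resulting profile values could then drive the visual mapping, e.g. selecting avatar attributes (clothing, pose, accessories) by thresholding each aggregated descriptor.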