Understanding how performers use the expressive resources of an instrument to communicate with an audience is a challenging problem in the sound and music computing field. Working directly with commercial recordings offers a way to capture this implicit knowledge and to study well-known performers. The sheer volume of information to be analyzed calls for automatic techniques, which must cope with imprecise analyses and place the extracted information in a broader perspective. This work presents a new approach, trend-based modeling, for identifying professional performers in commercial recordings. Concretely, starting from descriptors automatically extracted with state-of-the-art tools, our approach performs a qualitative analysis of the trends detected over a given set of melodic patterns. The feasibility of the approach is demonstrated on a dataset of monophonic violin recordings from 23 well-known performers.
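To illustrate the kind of qualitative trend analysis described above, the following minimal sketch abstracts a sequence of note-level descriptor values (e.g., loudness or tempo within one melodic pattern) into symbolic trends. The function name, tolerance parameter, and symbol alphabet are illustrative assumptions, not the paper's actual formulation.

```python
def qualitative_trends(values, eps=0.05):
    """Map consecutive descriptor values to qualitative trend symbols.

    'U' = up, 'D' = down, 'S' = stable (change within a relative
    tolerance eps). This is a hypothetical abstraction step, not the
    paper's exact procedure.
    """
    trends = []
    for prev, curr in zip(values, values[1:]):
        delta = curr - prev
        # Relative stability test; the 1e-9 floor avoids division-by-zero issues
        if abs(delta) <= eps * max(abs(prev), 1e-9):
            trends.append("S")
        elif delta > 0:
            trends.append("U")
        else:
            trends.append("D")
    return "".join(trends)

# Example: note-level loudness values for one melodic pattern
print(qualitative_trends([0.50, 0.55, 0.56, 0.40]))  # -> "USD"
```

Such symbolic trend strings can then be compared across performers for the same melodic pattern, so that classification operates on qualitative shapes rather than on imprecise raw descriptor values.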