We present a method for modeling the temporal profiles of sound descriptors using segmental models. Unlike standard HMMs, this approach can capture the fine structure of temporal profiles with a reduced number of states. These states, which we call primitives, can be chosen by the user from prior knowledge and assembled to model symbolic musical elements. In this paper, we describe this general methodology and evaluate it on a dataset of violin recordings containing crescendo/decrescendo, glissando, and sforzando gestures. The results show that, in this context, the segmental model can segment and recognize these musical elements with satisfactory accuracy.