Hummi-com: humming-based music composition system
Proceedings of the 20th ACM international conference on Multimedia
In this paper, we propose a robust Voice-to-MIDI (V-to-M) system with which a user can input MIDI sequence data by naturally singing melodies with lyrics. A Voice-to-MIDI system translates singing voices into digital musical data, i.e., MIDI sequence data. Such a system lets users input melodies intuitively, freeing them from manually translating memorized melodies into chromatic pitches. However, the translation quality of ordinary Voice-to-MIDI systems is insufficient; one of the most significant problems is the poor accuracy of note segmentation. We solve this problem by having the user perform "rhythmic tapping" concurrently with singing. We evaluated the proposed method by the accuracy of the number of segmented notes and their pitches, and confirmed that our system outperforms ordinary Voice-to-MIDI systems. Thus, the system achieves both easy, intuitive composition of MIDI sequence data and high accuracy in translating sung data into MIDI sequence data.
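The core segmentation idea described above — treating tap times as note boundaries and assigning each bounded segment a pitch from the voiced frames it contains — can be sketched as follows. This is a minimal illustration under assumed data layouts (an F0 track as time/frequency pairs, taps as timestamps); the function names are hypothetical and not the authors' implementation.

```python
import math

def freq_to_midi(freq):
    # Convert a frequency in Hz to the nearest MIDI note number (A4 = 440 Hz = 69).
    return round(69 + 12 * math.log2(freq / 440.0))

def segment_by_taps(pitch_track, tap_times):
    # pitch_track: list of (time_sec, f0_hz) estimates from the singing voice;
    #              f0_hz <= 0 marks unvoiced frames.
    # tap_times: ascending tap timestamps; each pair of consecutive taps
    #            bounds one note, sidestepping acoustic onset detection.
    notes = []
    for start, end in zip(tap_times, tap_times[1:]):
        voiced = sorted(f for t, f in pitch_track if start <= t < end and f > 0)
        if voiced:
            # Use the median F0 of the segment as its note pitch.
            median_f0 = voiced[len(voiced) // 2]
            notes.append((start, end, freq_to_midi(median_f0)))
    return notes
```

For example, with taps at 0.0, 0.5, and 1.0 s and a pitch track holding 440 Hz then 523.25 Hz, the sketch yields two notes with MIDI numbers 69 (A4) and 72 (C5).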