MM '11 Proceedings of the 19th ACM international conference on Multimedia
This paper presents TMSE, a novel Tempo-sensitive Music Search Engine with multimodal inputs for wellness and therapeutic applications. TMSE integrates six interaction modes (Query-by-Number, Query-by-Sliding, Query-by-Example, Query-by-Tapping, Query-by-Clapping, and Query-by-Walking) into a single interface to narrow the intention gap when a user searches for music by tempo. Our preliminary evaluation indicates that the multimodal inputs of TMSE enable users to formulate tempo-related queries more easily than existing music search engines do.
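Modes such as Query-by-Tapping presumably reduce a stream of user tap timestamps to a tempo estimate in beats per minute before the search is issued. A minimal sketch of that reduction (the function name and the use of a median for robustness are illustrative assumptions, not details from the paper):

```python
from statistics import median

def tempo_from_taps(tap_times):
    """Estimate tempo in BPM from tap timestamps given in seconds.

    Illustrative sketch: takes the median inter-tap interval, which is
    robust to a single stray or missed tap, and converts it to BPM.
    """
    if len(tap_times) < 2:
        raise ValueError("need at least two taps to estimate tempo")
    intervals = [b - a for a, b in zip(tap_times, tap_times[1:])]
    return 60.0 / median(intervals)

# Taps spaced 0.5 s apart correspond to a 120 BPM query.
print(tempo_from_taps([0.0, 0.5, 1.0, 1.5, 2.0]))  # → 120.0
```

The same numeric tempo could then serve as the common query representation for the other input modes (a typed number, a slider position, or a beat-tracked audio example).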