A multimodal framework for music inputs (poster session)

  • Authors:
  • Goffredo Haus; Emanuele Pollastri

  • Affiliations:
  • L.I.M. - Laboratorio di Informatica Musicale, Department of Computer Science, State University of Milan, via Comelico 39, I-20135 Milan, Italy (both authors)

  • Venue:
  • MULTIMEDIA '00: Proceedings of the eighth ACM international conference on Multimedia
  • Year:
  • 2000

Abstract

The growth of digital music databases demands new content-based methods of interfacing with stored data; although indexing and retrieval techniques have been deeply investigated, an integrated view of the querying mechanism has never been established before. Moreover, the multimodal nature of music should be exploited to match users' expectations as well as their skills. In this paper, we propose a hierarchy of music interfaces that is suitable for existing prototypes of music information retrieval systems; within this framework, human/computer interaction can be improved by singing, playing, or notating music. Dealing with multiple inputs poses many challenging problems, both in combining them and in the low-level translation needed to transform an acoustic signal into a symbolic representation. This paper addresses the latter problem in some detail, aiming to develop music interfaces available not only to trained musicians.
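The signal-to-symbol translation mentioned in the abstract can be illustrated with a minimal sketch (not the paper's actual method): once a pitch tracker has reduced a sung or played query to a sequence of fundamental-frequency estimates, each estimate can be quantized to the nearest note of the equal-tempered scale via the standard frequency-to-MIDI mapping. The function names and the sample pitch sequence below are hypothetical.

```python
import math

def freq_to_midi(freq_hz: float) -> int:
    """Map a fundamental-frequency estimate (Hz) to the nearest MIDI note number.

    Uses the equal-tempered reference A4 = 440 Hz = MIDI note 69.
    """
    if freq_hz <= 0:
        raise ValueError("frequency must be positive")
    return round(69 + 12 * math.log2(freq_hz / 440.0))

def midi_to_name(note: int) -> str:
    """Render a MIDI note number as a pitch name, e.g. 69 -> 'A4'."""
    names = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
    return f"{names[note % 12]}{note // 12 - 1}"

# A (hypothetical) sung query reduced to pitch estimates in Hz by a tracker:
pitches = [261.6, 293.7, 329.6, 440.0]
symbols = [midi_to_name(freq_to_midi(f)) for f in pitches]
print(symbols)  # -> ['C4', 'D4', 'E4', 'A4']
```

A real query-by-humming front end would also have to segment notes in time and tolerate detuned singing, which is precisely why the abstract frames this translation as a challenging problem rather than a solved one.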