Machine listening: acoustic interface with ART

  • Authors:
  • Benjamin Smith; Guy Garnett

  • Affiliations:
  • University of Illinois at Urbana-Champaign, Urbana, Illinois, United States (both authors)

  • Venue:
  • Proceedings of the 2012 ACM International Conference on Intelligent User Interfaces
  • Year:
  • 2012

Abstract

Recent developments in machine listening present opportunities for new paradigms of human-computer interaction. Voice recognition systems exemplify a typical approach that conforms to event-oriented control models. Acoustic sound, however, is continuous and high-dimensional, offering a rich medium for computer interaction. Unsupervised machine learning models hold great potential for real-time machine listening and understanding of audio data. We propose a method for harnessing unsupervised machine learning algorithms, specifically Adaptive Resonance Theory (ART), to inform machine listening, build musical context information, and drive real-time interactive performance systems. We present the design and evaluation of this model, leveraging the expertise of trained, improvising musicians.
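To illustrate the kind of unsupervised categorization the abstract describes, the sketch below implements a minimal Fuzzy ART clusterer, a standard ART variant for continuous-valued inputs. This is not the authors' implementation; the class, parameter values, and method names are illustrative. Each incoming feature vector (e.g. a normalized frame of audio features) is either matched to an existing category or, if no category passes the vigilance test, committed as a new one — allowing categories to emerge online, in real time.

```python
import numpy as np

class FuzzyART:
    """Minimal Fuzzy ART sketch (illustrative, not the paper's code).

    alpha: choice parameter; beta: learning rate (1.0 = fast learning);
    rho: vigilance -- higher values yield finer-grained categories.
    """

    def __init__(self, alpha=0.01, beta=1.0, rho=0.75):
        self.alpha, self.beta, self.rho = alpha, beta, rho
        self.w = []  # one weight vector per learned category

    def _complement_code(self, x):
        # Standard Fuzzy ART preprocessing: [x, 1 - x] for inputs in [0, 1].
        x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
        return np.concatenate([x, 1.0 - x])

    def train_one(self, x):
        """Present one input; return the index of the resonating or new category."""
        i = self._complement_code(x)
        # Choice function ranks existing categories by fuzzy-AND overlap.
        scores = [np.minimum(i, w).sum() / (self.alpha + w.sum()) for w in self.w]
        for j in np.argsort(scores)[::-1]:
            match = np.minimum(i, self.w[j]).sum() / i.sum()
            if match >= self.rho:  # vigilance test passed: resonance
                self.w[j] = (self.beta * np.minimum(i, self.w[j])
                             + (1 - self.beta) * self.w[j])
                return int(j)
        # No existing category matched closely enough: commit a new one.
        self.w.append(i.copy())
        return len(self.w) - 1
```

Feeding a stream of frame-wise feature vectors through `train_one` yields a running sequence of category labels, the sort of signal that could drive an interactive performance system as the abstract proposes.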