Hyper-shaku (Border-Crossing): towards the multi-modal gesture-controlled hyper-instrument
NIME '06 Proceedings of the 2006 conference on New interfaces for musical expression
This paper describes the performance, mapping, transformation and representation phases of a model for gesture-triggered musical creativity. These phases are articulated in an example creative environment, Hyper-Shaku (Border-Crossing), an audio-visually augmented shakuhachi performance that demonstrates the adaptive, empathetic response of the generative systems. The shakuhachi is a traditional Japanese end-blown bamboo Zen flute. Its five holes and simple construction require subtle and complex gestural movements to produce its diverse range of pitches, vibrato and pitch inflections, making it an ideal candidate for gesture capture. The environment uses computer vision, gesture sensors and computer listening to process and generate electronic music and visualization in real-time response to the live performer. This example integrates looming auditory motion and Neural Oscillator Network (NOSC) generative modules.
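The four phases named in the abstract (performance, mapping, transformation, representation) can be read as a data-flow pipeline from captured gesture to rendered output. The following minimal Python sketch illustrates that pipeline shape only; all function names, the choice of gesture features (tilt, breath), and the mapping rules are illustrative assumptions, not the system described in the paper.

```python
# Hypothetical sketch of a gesture-to-sound pipeline in four phases:
# performance (capture) -> mapping -> transformation -> representation.
# Feature names and mappings are invented for illustration.

def performance(frame):
    """Capture phase: extract toy gesture features from a sensor frame."""
    return {"tilt": frame[0], "breath": frame[1]}

def mapping(features):
    """Mapping phase: gesture features -> musical control parameters."""
    pitch = 440.0 * (2.0 ** (features["tilt"] / 12.0))  # tilt as a semitone offset from A4
    amplitude = max(0.0, min(1.0, features["breath"]))  # clamp breath pressure to [0, 1]
    return {"pitch_hz": pitch, "amplitude": amplitude}

def transformation(params):
    """Transformation phase: a generative step (here, add a quieter octave voice)."""
    return [params,
            {"pitch_hz": params["pitch_hz"] * 2.0,
             "amplitude": params["amplitude"] * 0.5}]

def representation(voices):
    """Representation phase: render control parameters as readable events."""
    return [f"{v['pitch_hz']:.1f} Hz @ {v['amplitude']:.2f}" for v in voices]

frame = (2.0, 0.8)  # e.g. tilt of +2 semitones, breath pressure 0.8
events = representation(transformation(mapping(performance(frame))))
```

In a real-time system each phase would run per sensor frame and feed a synthesis engine rather than return strings; the point here is only the staged structure the abstract describes.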