Although we can sense someone's vocalizations with our ears, eyes, and haptic sense, speech is invisible to us without the aid of technology. In this paper, we present three interactive artworks that explore the question: "if we could see our speech, what might it look like?" These artworks are concerned with the aesthetic implications of making the human voice visible, and were created with a particular emphasis on interaction designs that support the perception of tight spatio-temporal relationships between sound, image, and the body. We coin the term in-situ speech visualization to describe a family of augmented-reality techniques by which graphic representations of speech are made to appear coincident with their apparent point of origination.
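The core coupling described above — a graphic that tracks both the speaker's loudness and the spatial origin of the voice — can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the RMS-to-radius mapping, and the assumption that a mouth position is already available from some tracker are all hypothetical simplifications introduced here for illustration.

```python
import math

def rms_amplitude(samples):
    # Root-mean-square loudness of one audio frame (sample values in [-1, 1]).
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def speech_glyph(mouth_xy, samples, gain=200.0):
    # Place a circular "speech glyph" at the speaker's (tracked) mouth
    # position, sized by instantaneous loudness -- a toy version of the
    # tight sound/image/body coupling the abstract describes.
    x, y = mouth_xy
    return {"x": x, "y": y, "radius": gain * rms_amplitude(samples)}

# A silent frame produces no glyph; a loud frame a large one.
silent = [0.0] * 256
loud = [0.5 if i % 2 else -0.5 for i in range(256)]
print(speech_glyph((320, 240), silent)["radius"])  # 0.0
print(speech_glyph((320, 240), loud)["radius"])    # 100.0
```

In a real system the mouth coordinate would come from face tracking (or from the fixed geometry of an installation), and the audio feature driving the glyph could be richer than RMS (pitch, spectral shape, phoneme class); the point of the sketch is only the per-frame pairing of an acoustic measurement with a spatial anchor.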