Dynamic audio feedback enriches the interaction with a mobile device. Novel sensor technologies and audio synthesis tools offer a vast design space for coupling sensory input with audio output. This paper presents a study in which vocal sketching was used as a prototyping method to capture ideas and expectations in the early stages of designing multimodal interaction. We describe an experiment in which participants were given a graspable mobile device and asked to vocally sketch the sounds it should produce in communication and musical-expression scenarios. The sensory input methods were limited to gestures such as touching, squeezing, and moving the device. Vocal sketching let us examine more closely how gesture and sound could be coupled in the use of our prototype device, for example raising the pitch as the device moves upwards. The results reported in this paper have already informed our expectations for the actual design phase of the audio modality.
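One gesture-sound coupling mentioned above is mapping upward movement of the device to rising pitch. A minimal sketch of such a mapping is shown below; the function name, frequency bounds, and the assumption of a normalized height reading from the device's sensors are illustrative, not part of the study's actual implementation:

```python
def elevation_to_pitch(height_norm, f_low=220.0, f_high=880.0):
    """Map a normalized vertical position (0.0 = lowest, 1.0 = highest)
    to a pitch in Hz.

    Hypothetical example: bounds of 220-880 Hz span two octaves.
    """
    # Clamp the sensor reading to the expected range.
    h = max(0.0, min(1.0, height_norm))
    # Exponential interpolation: equal steps in position produce equal
    # musical intervals, which matches pitch perception better than a
    # linear frequency ramp.
    return f_low * (f_high / f_low) ** h
```

With these bounds, the midpoint of the movement range lands one octave above the lowest pitch (440 Hz), so the mapping sounds symmetrical to a listener even though the frequency scale is not linear.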