The paper presents the concept, implementation, and a feasibility study of a user interface technique named VAVS ("voice-assisted visual search"). VAVS employs the user's voice input to assist in searching for objects of interest in complex displays. The voice input is compared with attributes of the visually presented objects; if there is a match, the matching object is highlighted to help the user visually locate it. The paper discusses how VAVS differs from voice commands and from multimodal input techniques. An interactive prototype implementing the VAVS concept with a standard voice recognition program is described. The paper also reports an empirical study in which an object location task was carried out with and without VAVS; the VAVS condition was associated with higher performance and user satisfaction. The paper concludes with a discussion of directions for future work.
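The core matching step the abstract describes (voice input compared with attributes of on-screen objects, with matches highlighted) can be sketched as follows. This is a minimal illustration only: the `DisplayObject` type, its fields, and the token-matching rule are assumptions for the sketch, not the authors' implementation, and a real system would take its input from a speech recognizer rather than a string.

```python
from dataclasses import dataclass, field

@dataclass
class DisplayObject:
    # Hypothetical representation of one object on a complex display.
    name: str
    attributes: set[str] = field(default_factory=set)
    highlighted: bool = False

def vavs_match(spoken: str, objects: list[DisplayObject]) -> list[DisplayObject]:
    """Flag for highlighting every object whose name or attributes
    match a word in the recognized voice input."""
    tokens = set(spoken.lower().split())
    matches = []
    for obj in objects:
        if obj.name.lower() in tokens or tokens & {a.lower() for a in obj.attributes}:
            obj.highlighted = True
            matches.append(obj)
    return matches

# Example: saying "blue folder" highlights only the matching object.
scene = [
    DisplayObject("folder", {"blue", "large"}),
    DisplayObject("printer", {"gray"}),
]
hits = vavs_match("blue folder", scene)
```

Here `hits` contains only the folder object, whose `highlighted` flag is now set; the printer, matching neither the name nor any attribute, is left untouched.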