“Put-that-there”: Voice and gesture at the graphics interface
SIGGRAPH '80 Proceedings of the 7th annual conference on Computer graphics and interactive techniques
“Put That There” is a voice- and gesture-driven interactive system implemented by the Architecture Machine Group at MIT. It allows a user to build and modify a graphical database on a large-format video display. The goal of the research is a simple, conversational interface to sophisticated computer interaction. Natural language and gesture are used for input, while speech output allows the system to query the user about ambiguous input. The project starts from the assumption that speech-recognition hardware will never be 100% accurate, and it explores techniques that increase the usefulness (the “effective accuracy”) of such a system: redundant input channels, syntactic and semantic analysis, and context-sensitive interpretation. In addition, we argue that recognition errors become more tolerable when they are made evident immediately through feedback and can be corrected easily by voice.
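The core of the redundant-channel idea can be sketched in a few lines: a deictic word such as “that” or “there” carries no referent on its own, but the pointing gesture sampled at the moment the word is spoken supplies one. The following is a minimal illustrative sketch, not the original MIT implementation (which used dedicated speech-recognition hardware and a magnetic position sensor); all names (`GestureSample`, `resolve_command`, the object labels) are hypothetical.

```python
# Hypothetical sketch: fusing recognized speech with pointing-gesture
# samples to resolve deictic words in a command like "put that there".
from dataclasses import dataclass

@dataclass
class GestureSample:
    time: float   # when the pointing position was sampled (seconds)
    target: str   # object or display location under the pointer

def nearest_sample(samples, t):
    """Return the gesture sample closest in time to a word spoken at t."""
    return min(samples, key=lambda s: abs(s.time - t))

def resolve_command(words, samples):
    """Replace each deictic word with the gesture target recorded
    closest to the moment that word was spoken."""
    resolved = []
    for word, t in words:                 # each word tagged with its utterance time
        if word in ("that", "there", "it"):
            resolved.append(nearest_sample(samples, t).target)
        else:
            resolved.append(word)
    return resolved

# "Put that there" while pointing first at a ship, then at a harbor cell:
words = [("put", 0.0), ("that", 0.4), ("there", 1.1)]
samples = [GestureSample(0.4, "blue_ship"), GestureSample(1.1, "harbor_cell")]
print(resolve_command(words, samples))    # ['put', 'blue_ship', 'harbor_cell']
```

The same time-alignment step also supports the error-tolerance argument: because each deictic word is grounded in a concrete on-screen target, the system can echo its interpretation back immediately, and a misrecognition is visible (and correctable by voice) at once.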