Finding one's way around a large university campus can be difficult. We developed VTQuest, http://sunfish.cs.vt.edu/VTQuestV, a web-based software system that addresses this problem for the campus of Virginia Tech (http://www.vt.edu/). VTQuest enables (a) multimodal interaction with voice, mouse, and keyboard, (b) browsing the campus map, (c) locating a building by name, abbreviation, or category, or finding buildings within a given distance on the campus map, (d) locating a room on the floor plan of a building, and (e) obtaining walking directions from one building to another. VTQuest provides these capabilities for 103 buildings and includes floor plans for most of them. VTQuest is built on the Java 2 Platform, Enterprise Edition (J2EE), using Scalable Vector Graphics (SVG) and Speech Application Language Tags (SALT). SVG enables zooming into the maps without losing image quality. The voice interface offers a variety of features, including an extensive grammar and out-of-turn interaction.
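The lossless zooming the abstract attributes to SVG comes from SVG being a vector format: magnification is done by shrinking the `viewBox` attribute rather than scaling pixels, so the renderer redraws shapes at full quality at any scale. The sketch below illustrates the idea with a hypothetical helper (`zoomViewBox` is not part of VTQuest's actual code); it computes a new `viewBox` string that zooms in on a chosen point of a map.

```javascript
// Sketch: zooming an SVG map by adjusting its viewBox.
// Hypothetical helper, not taken from VTQuest itself. Because SVG is
// vector-based, a smaller viewBox magnifies the map with no quality loss.
function zoomViewBox(viewBox, factor, cx, cy) {
  const [x, y, w, h] = viewBox.split(/\s+/).map(Number);
  const newW = w / factor;
  const newH = h / factor;
  // Keep the focal point (cx, cy) at the same relative position in the view.
  const newX = cx - (cx - x) / factor;
  const newY = cy - (cy - y) / factor;
  return `${newX} ${newY} ${newW} ${newH}`;
}

// Example: zoom 2x on the center of a 1000x800 campus map, then apply it:
//   svgElement.setAttribute("viewBox", zoomViewBox("0 0 1000 800", 2, 500, 400));
console.log(zoomViewBox("0 0 1000 800", 2, 500, 400)); // → "250 200 500 400"
```

Since only the `viewBox` numbers change, the browser re-rasterizes the vector map at the new scale; a raster campus map would instead become blocky under the same magnification.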