Most commercial software applications are designed for a single user working with a keyboard and mouse on an upright monitor. Our interest is in exploiting these systems so they work over a digital table. Mirroring what people do when working over traditional tables, we want to allow multiple people to interact naturally with the tabletop application and with each other via rich speech and hand gestures. In previous papers, we illustrated multi-user gesture and speech interaction on a digital table for geospatial applications -- Google Earth, Warcraft III and The Sims.

In this paper, we describe our underlying architecture: GSI Demo. First, GSI Demo creates a run-time wrapper around existing single-user applications: it accepts and translates speech and gestures from multiple people into a single stream of keyboard and mouse inputs recognized by the application. Second, it lets people use multimodal demonstration -- instead of programming -- to quickly map their own speech and gestures to these keyboard/mouse inputs. For example, continuous gestures are trained by saying "Computer, when I do [one finger gesture], you do [mouse drag]". Similarly, discrete speech commands can be trained by saying "Computer, when I say [layer bars], you do [keyboard and mouse macro]". The end result is that end users can rapidly transform single-user commercial applications into a multi-user, multimodal digital tabletop system.
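The demonstration-based mapping described above can be sketched as a simple record-and-replay binding table: while the user demonstrates, low-level keyboard/mouse events are buffered; afterwards they are bound to the named speech or gesture input and replayed whenever that input is recognized. This is a minimal illustrative sketch, not the actual GSI Demo implementation -- all class and method names here are hypothetical.

```python
class DemoMapper:
    """Hypothetical sketch of mapping demonstrated speech/gesture inputs
    to recorded keyboard/mouse events (not the real GSI Demo API)."""

    def __init__(self):
        self.bindings = {}      # input name -> list of low-level events
        self._recording = None  # (input name, event buffer) during training

    def begin_demo(self, input_name):
        """'Computer, when I do/say <input_name> ...' starts recording."""
        self._recording = (input_name, [])

    def capture(self, event):
        """Buffer a raw keyboard/mouse event during the demonstration."""
        if self._recording is not None:
            self._recording[1].append(event)

    def end_demo(self):
        """'... you do <macro>' stops recording and stores the binding."""
        name, events = self._recording
        self.bindings[name] = events
        self._recording = None

    def translate(self, input_name):
        """At run time, replay the bound events for a recognized input;
        unrecognized inputs produce no events."""
        return self.bindings.get(input_name, [])


# Example: training the discrete speech command "layer bars"
mapper = DemoMapper()
mapper.begin_demo("layer bars")
mapper.capture(("key", "ctrl+l"))       # demonstrated keyboard macro step
mapper.capture(("click", 120, 340))     # demonstrated mouse click
mapper.end_demo()
```

After training, a recognized utterance of "layer bars" would be translated back into the recorded event stream via `mapper.translate("layer bars")`, which the wrapper could then feed to the single-user application as ordinary keyboard/mouse input.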