Designing and implementing applications that handle multiple recognition-based interaction technologies, such as speech and gesture input, is a difficult task. IMBuilder and MEngine are the two components of a new toolkit for rapidly creating and testing multimodal interface designs. First, an interaction model is specified as a collection of finite state machines using a simple graphical tool (IMBuilder). This interaction model can then be tested in a multimodal framework (MEngine) that automatically performs input recognition (speech and gesture) and modality integration. Developers can thus build complete multimodal applications without dealing with recognition engine internals or modality integration. Furthermore, several interaction models can be tested in quick succession to find the most effective use and combination of input modalities with minimal implementation effort.
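The abstract does not describe the toolkit's internals, but the core idea of specifying an interaction model as a finite state machine over recognized multimodal events can be illustrated with a small sketch. The Python snippet below models a hypothetical "put that there"-style command in which a spoken keyword and two pointing gestures must arrive in sequence before an action fires; the event names, states, and dispatch interface are assumptions for illustration, not IMBuilder's or MEngine's actual API.

```python
# Hypothetical sketch of an interaction model as a finite state machine,
# in the spirit of IMBuilder/MEngine. States, event names, and the
# on_event() interface are illustrative assumptions, not the toolkit's API.

class InteractionModel:
    """FSM for a 'put <object> there'-style multimodal command."""

    def __init__(self):
        self.state = "idle"
        self.slots = {}  # inputs accumulated from each modality

    def on_event(self, modality, value):
        """Feed one recognized input event; return an action when complete."""
        if self.state == "idle" and modality == "speech" and value == "put":
            self.state = "await_object"
        elif self.state == "await_object" and modality == "gesture":
            self.slots["object"] = value   # object picked out by pointing
            self.state = "await_target"
        elif self.state == "await_target" and modality == "gesture":
            self.slots["target"] = value   # location picked out by pointing
            action = ("move", dict(self.slots))
            self.state, self.slots = "idle", {}
            return action
        return None  # transition consumed, command not yet complete


# Example: events as a recognizer might deliver them during one turn.
model = InteractionModel()
for event in [("speech", "put"), ("gesture", "cup"), ("gesture", "table")]:
    result = model.on_event(*event)
    if result:
        print(result)  # ('move', {'object': 'cup', 'target': 'table'})
```

In this framing, trying an alternative interaction model, say, one that accepts the gesture before the speech, means swapping in a different state machine while the recognition and event-dispatch layer stays untouched, which is the rapid-testing benefit the abstract claims.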