Choosing an appropriate toolkit for creating a multimodal interface is a cumbersome task. Several specialized toolkits include fusion and fission engines that allow developers to combine and decompose modalities in order to capture multimodal input and provide multimodal output. Unfortunately, the extent to which such a toolkit can facilitate the creation of a multimodal interface is hard or even impossible to estimate, due to the absence of a scale on which the toolkit's capabilities can be measured. In this paper, we propose such a measurement scale, which allows specialized toolkits to be assessed without time-consuming testing or source code analysis. We use this scale to measure and compare the capabilities of three toolkits: CoGenIVE, HephaisTK and ICon.
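To illustrate what a fusion engine does, the following is a minimal, hypothetical sketch of time-window-based input fusion in the style of "Put-that-there": each spoken deictic is paired with the gesture closest to it in time. The Event type, the fuse function and the window parameter are illustrative assumptions, not the API of CoGenIVE, HephaisTK or ICon.

```python
from dataclasses import dataclass

@dataclass
class Event:
    modality: str   # e.g. "speech" or "gesture"
    value: str      # recognized token or gesture payload
    t: float        # timestamp in seconds

def fuse(events, window=1.0):
    """Pair each speech event with the gesture closest in time,
    provided it falls within the fusion window."""
    speech = [e for e in events if e.modality == "speech"]
    gestures = [e for e in events if e.modality == "gesture"]
    fused = []
    for s in speech:
        near = [g for g in gestures if abs(g.t - s.t) <= window]
        if near:
            g = min(near, key=lambda g: abs(g.t - s.t))
            fused.append((s.value, g.value))
    return fused

events = [
    Event("speech", "that", 0.2),
    Event("gesture", "point@(120,45)", 0.3),
    Event("speech", "there", 1.6),
    Event("gesture", "point@(300,210)", 1.7),
]
print(fuse(events))
# → [('that', 'point@(120,45)'), ('there', 'point@(300,210)')]
```

A real fusion engine additionally handles ambiguity, confidence scores and more than two modalities, but the core idea of aligning events from separate input streams within a temporal window is the same.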