The design of applications with multimodal interfaces currently requires complex handcrafting by interface experts and lacks compliance with industry standards of cost-effectiveness, maintainability, and user focus, such as those achieved by current User-Centered Design methods. This paper presents an initial step towards a design-by-example approach, whereby the end-user's multimodal preferences for a specific domain can be learned during the design phase. In particular, we propose to reduce design costs by using tangible objects for designing multimodal user interfaces. A heuristic evaluation shows little to no effect on users' preferred multimodal behaviour when comparing tangible and virtual objects during design.