A Real-Time Framework for Natural Multimodal Interaction with Large Screen Displays
ICMI '02 Proceedings of the 4th IEEE International Conference on Multimodal Interfaces
This paper presents a framework for designing natural multimodal human-computer interaction (HCI) systems. At the core of the proposed framework is a principled method for combining information derived from audio and visual cues. To achieve natural interaction, the audio and visual modalities are fused and coupled with feedback through a large screen display. Careful design, with due consideration of all aspects of the system's interaction cycle and integration, has resulted in a successful system. The performance of the proposed framework has been validated through several prototype systems as well as commercial applications for the retail and entertainment industries. Informal user studies were conducted to assess the impact of these multimodal systems (MMS). The system performed according to its specifications in 95% of cases, and users showed ad hoc proficiency, indicating natural acceptance of such systems.
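The abstract does not spell out the fusion method itself. As a rough illustration only, decision-level (late) fusion of speech and gesture recognizer confidences might look like the sketch below; the `fuse` function, the command vocabulary, the modality weights, and the 0.01 floor for a missing modality are all illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch of decision-level fusion of audio and visual cues.
# Each recognizer supplies per-command confidences in [0, 1]; a weighted
# product favors commands supported by both modalities.

def fuse(speech_scores, gesture_scores, w_speech=0.6, w_gesture=0.4):
    """Combine per-command confidences from two modalities into a
    normalized distribution over candidate commands."""
    commands = set(speech_scores) | set(gesture_scores)
    fused = {}
    for cmd in commands:
        s = speech_scores.get(cmd, 0.01)  # small floor if a modality is silent
        g = gesture_scores.get(cmd, 0.01)
        fused[cmd] = (s ** w_speech) * (g ** w_gesture)
    total = sum(fused.values())
    return {cmd: v / total for cmd, v in fused.items()}

# Example: speech favors "select"; gesture agrees, so fusion reinforces it.
speech = {"select": 0.7, "move": 0.2, "zoom": 0.1}
gesture = {"select": 0.5, "move": 0.4}
result = fuse(speech, gesture)
best = max(result, key=result.get)
```

In a weighted-product scheme like this, a command must score reasonably in both modalities to win, which is one common way such systems suppress spurious single-modality detections.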