The automatic recognition of gestures
A design space for multimodal systems: concurrent processing and data fusion
CHI '93 Proceedings of the INTERACT '93 and CHI '93 Conference on Human Factors in Computing Systems
Speech recognition in noisy environments: a survey
Speech Communication
Integration and synchronization of input modes during multimodal human-computer interaction
Proceedings of the ACM SIGCHI Conference on Human factors in computing systems
Networked virtual environments: design and implementation
Something from nothing: augmenting a paper-based work practice via multimodal interaction
DARE '00 Proceedings of DARE 2000 on Designing augmented reality environments
An open software architecture for virtual reality interaction
VRST '01 Proceedings of the ACM symposium on Virtual reality software and technology
VRPN: a device-independent, network-transparent VR peripheral system
VRST '01 Proceedings of the ACM symposium on Virtual reality software and technology
Multimodal Interaction for 2D and 3D Environments
IEEE Computer Graphics and Applications
Designing the user interface for pen and speech multimedia applications
CHI '99 Extended Abstracts on Human Factors in Computing Systems
Cooperation between Reactive 3D Objects and a Multimodal X Window Kernel for CAD
Multimodal Human-Computer Communication, Systems, Techniques, and Experiments
A Framework for Fast and Accurate Collision Detection for Haptic Interaction
VR '99 Proceedings of the IEEE Virtual Reality
Polyvalent Display Framework to Control Virtual Navigations by 6DOF Tracking
VR '02 Proceedings of the IEEE Virtual Reality Conference 2002
Modality fusion for graphic design applications
Proceedings of the 6th international conference on Multimodal interfaces
DNA in Virtuo visualization and exploration of 3D genomic structures
AFRIGRAPH '04 Proceedings of the 3rd international conference on Computer graphics, virtual reality, visualisation and interaction in Africa
A user interface framework for multimodal VR interactions
ICMI '05 Proceedings of the 7th international conference on Multimodal interfaces
Early versus late fusion in semantic video analysis
Proceedings of the 13th annual ACM international conference on Multimedia
Cluster-based solution for virtual and augmented reality applications
GRAPHITE '05 Proceedings of the 3rd international conference on Computer graphics and interactive techniques in Australasia and South East Asia
History based reactive objects for immersive CAD
SM '04 Proceedings of the ninth ACM symposium on Solid modeling and applications
A framework for designing adaptative systems in VR applications
CCNC'09 Proceedings of the 6th IEEE Conference on Consumer Communications and Networking Conference
Insights on the design of InTml
Presence: Teleoperators and Virtual Environments
SACARI: an immersive remote driving interface for autonomous vehicles
ICCS'05 Proceedings of the 5th international conference on Computational Science - Volume Part II
This paper describes the EVI3d framework, a distributed architecture developed to enhance interaction within Virtual Environments (VE). The framework manages a wide range of multi-sensorial devices, such as trackers, data gloves, and speech or gesture recognition systems, as well as haptic devices. Its structure allows device services and their clients to be dispatched across as many machines as required. With the timestamped events provided by its time synchronization system, it becomes possible to design a dedicated module for multimodal fusion. To this end, we describe how the EVI3d framework manages not only low-level events but also abstract modalities. Moreover, the data flow service of the EVI3d framework solves the problem of sharing the virtual scene between modality modules.
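The fusion approach the abstract describes relies on comparing timestamps attached to events from different modalities. As a rough illustration (not the EVI3d implementation; all names, payload formats, and the 0.5 s window are hypothetical), a fusion module might pair a speech command with the gesture event closest to it in time, provided the two fall within a tolerance window on the synchronized clock:

```python
from dataclasses import dataclass

@dataclass
class Event:
    modality: str    # e.g. "speech" or "gesture" (illustrative labels)
    payload: str     # recognized content of the event
    timestamp: float # seconds, on the shared synchronized clock

def fuse(speech_events, gesture_events, window=0.5):
    """Pair each speech event with the nearest-in-time gesture event,
    keeping only pairs closer than `window` seconds apart."""
    fused = []
    for s in speech_events:
        nearest = min(gesture_events,
                      key=lambda g: abs(g.timestamp - s.timestamp),
                      default=None)
        if nearest and abs(nearest.timestamp - s.timestamp) <= window:
            fused.append((s.payload, nearest.payload))
    return fused

speech = [Event("speech", "put that", 1.02)]
gestures = [Event("gesture", "point@(3,1,0)", 1.10),
            Event("gesture", "point@(0,0,2)", 2.40)]
print(fuse(speech, gestures))  # [('put that', 'point@(3,1,0)')]
```

The sketch shows why the dated events matter: without a common clock across the distributed device services, the timestamp comparison that drives the pairing would be meaningless.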