EGVE '02: Proceedings of the Workshop on Virtual Environments 2002.
Speech and gesture multimodal control of a whole Earth 3D visualization environment. VISSYM '02: Proceedings of the Symposium on Data Visualisation 2002.
Segmenting Hands of Arbitrary Color. FG '00: Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition, 2000.
“Put-that-there”: Voice and gesture at the graphics interface. SIGGRAPH '80: Proceedings of the 7th Annual Conference on Computer Graphics and Interactive Techniques.
Real Time Face and Object Tracking as a Component of a Perceptual User Interface. WACV '98: Proceedings of the 4th IEEE Workshop on Applications of Computer Vision.
A Real-Time Framework for Natural Multimodal Interaction with Large Screen Displays. ICMI '02: Proceedings of the 4th IEEE International Conference on Multimodal Interfaces.
ISWC '00: Proceedings of the 4th IEEE International Symposium on Wearable Computers.
Pointing gesture recognition based on 3D-tracking of face, hands and head orientation. Proceedings of the 5th International Conference on Multimodal Interfaces.
A user interface framework for multimodal VR interactions. ICMI '05: Proceedings of the 7th International Conference on Multimodal Interfaces.
On the usability of gesture interfaces in virtual reality environments. CLIHC '05: Proceedings of the 2005 Latin American Conference on Human-Computer Interaction.
Robust real-time upper body limb detection and tracking. Proceedings of the 4th ACM International Workshop on Video Surveillance and Sensor Networks.
A survey of skin-color modeling and detection methods. Pattern Recognition.
Adaptive learning of an accurate skin-color model. FGR '04: Proceedings of the Sixth IEEE International Conference on Automatic Face and Gesture Recognition.
FlowMouse: a computer vision-based pointing and gesture input device. INTERACT '05: Proceedings of the 2005 IFIP TC13 International Conference on Human-Computer Interaction.
One of the long-term goals in human-computer interaction is to support the more intuitive and natural communication methods, such as speech and hand gesture, that a user would employ with another person. In this paper, we present a multimodal 3D interaction mechanism in which a user can interact with a 3D model of a tourist location displayed on a kiosk screen from a distance of one meter, by means of gestures and voice commands, without wearing any special device, in a public place with a complex, non-static background. The system can be used in many applications, such as entertainment, touring, education, museum displays, and advertising.
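The core of such a multimodal mechanism is pairing a recognized voice command with a co-occurring pointing gesture. The paper does not give its fusion algorithm here, so the following is only a minimal sketch under assumed event types (`GestureEvent`, `SpeechEvent`, and the `fuse` function are hypothetical names): a spoken command is matched to the pointing gesture closest in time within a fixed window, yielding a command plus a screen target.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical event records; real recognizers would emit something similar.
@dataclass
class GestureEvent:
    kind: str   # e.g. "point"
    x: float    # normalized screen coordinates of the pointing target
    y: float
    t: float    # timestamp in seconds

@dataclass
class SpeechEvent:
    command: str  # e.g. "rotate", "zoom in"
    t: float

def fuse(speech: SpeechEvent, gestures: list[GestureEvent],
         window: float = 1.0) -> Optional[tuple[str, float, float]]:
    """Pair a voice command with the nearest-in-time pointing gesture.

    Returns (command, x, y) if a pointing gesture occurred within
    `window` seconds of the utterance, else None (speech-only command).
    """
    candidates = [g for g in gestures
                  if g.kind == "point" and abs(g.t - speech.t) <= window]
    if not candidates:
        return None
    best = min(candidates, key=lambda g: abs(g.t - speech.t))
    return (speech.command, best.x, best.y)
```

For example, a "rotate" utterance at t = 2.0 s paired with a point at (0.4, 0.6) at t = 1.8 s fuses into the single command ("rotate", 0.4, 0.6); an utterance with no gesture inside the window falls back to a speech-only command.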