This paper provides an overview of a multi-modal wearable computer system, SNAP&TELL. The system performs real-time gesture tracking, combined with audio-based control commands, to recognize objects in an environment, including outdoor landmarks. The system uses a single camera to capture images, which are then processed through color segmentation, fingertip shape analysis, robust tracking, and invariant object recognition to quickly identify the objects encircled and SNAPped by the user's pointing gesture. In addition, the system returns an audio narration, TELLing the user information about the object's classification, historical facts, usage, etc. This system provides enabling technology for the design of intelligent assistants supporting Web-On-The-World applications, with potential uses such as travel assistance, business advertisement, the design of smart living and working spaces, and pervasive wireless services and internet vehicles.
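The color-segmentation step of the pipeline can be illustrated with a minimal sketch. The rule-of-thumb RGB skin test and the threshold values below are illustrative assumptions, not the classifier actually used by SNAP&TELL:

```python
# Minimal sketch of a color-segmentation stage: classify each pixel as
# skin / non-skin with a simple rule-of-thumb RGB test, then build a
# binary mask that a fingertip-shape analysis could operate on.
# The thresholds are assumed for illustration only.

def is_skin(r, g, b):
    """Heuristic RGB skin test (assumed thresholds, not SNAP&TELL's)."""
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15
            and r > g and r > b)

def skin_mask(image):
    """image: rows of (r, g, b) tuples -> rows of 0/1 mask values."""
    return [[1 if is_skin(*px) else 0 for px in row] for row in image]

if __name__ == "__main__":
    frame = [
        [(210, 140, 120), (40, 40, 40)],    # skin-like pixel, dark background
        [(90, 90, 90),    (220, 150, 110)], # grey background, skin-like pixel
    ]
    print(skin_mask(frame))  # -> [[1, 0], [0, 1]]
```

In a real system the mask would be computed per frame over the camera image (typically after smoothing and morphological cleanup), and the fingertip tracker would then search the mask for hand-shaped connected regions.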