Efficient text recognition remains a challenge for augmented reality systems. In this paper, we propose a system that provides translations to the user in real time, using eye gaze as a more intuitive and efficient input for ubiquitous text reading and translation in head-mounted displays (HMDs). The eyes indicate regions of interest in text documents and activate optical character recognition (OCR) and translation functions. Visual feedback and navigation support the interaction process, and text snippets with their Japanese-to-English translations are presented in a see-through HMD. Focusing on travelers in Japan who need to read signs, we propose two different gaze gestures for activating the OCR text reading and translation function, and we evaluate which type of gesture suits our OCR scenario best. We also show that running our gaze-based OCR method on the extracted gaze regions provides faster access to information than traditional OCR approaches. Further benefits are that visual feedback on the extracted text region and the Japanese-to-English translation can both be presented in real time, and that, with a synchronized and calibrated HMD, the augmentations in this mixed reality application appear at exact locations in the user's view, allowing dynamic text translation management in head-up display systems.
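The pipeline the abstract describes (a gaze gesture activates OCR, which then runs only on the region the eyes indicated) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the dwell-style activation, and all thresholds (radius, duration, padding) are assumptions for the sketch; the OCR and translation steps themselves are left out.

```python
import math

def dwell_activated(samples, radius=30.0, min_duration=0.8):
    """One possible gaze-gesture trigger: return True if the gaze stays
    within `radius` pixels of its first position for at least
    `min_duration` seconds. `samples` is a list of (t, x, y) tuples.
    All thresholds are illustrative assumptions."""
    if len(samples) < 2:
        return False
    t0, x0, y0 = samples[0]
    if samples[-1][0] - t0 < min_duration:
        return False
    return all(math.hypot(x - x0, y - y0) <= radius for _, x, y in samples)

def gaze_roi(fixations, img_w, img_h, pad=40):
    """Bounding box around recent fixation points, padded and clamped to
    the camera frame; only this crop would be handed to OCR, instead of
    the full image. `fixations` is a list of (x, y) pixel coordinates."""
    xs = [x for x, _ in fixations]
    ys = [y for _, y in fixations]
    x0 = max(0, min(xs) - pad)
    y0 = max(0, min(ys) - pad)
    x1 = min(img_w, max(xs) + pad)
    y1 = min(img_h, max(ys) + pad)
    return x0, y0, x1, y1
```

Restricting recognition to `gaze_roi` rather than the whole frame is one plausible reason a gaze-based OCR method can respond faster than traditional full-image OCR: the recognizer processes far fewer pixels per activation.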