The active badge location system. ACM Transactions on Information Systems (TOIS).
Computer Vision.
First Person Indoor/Outdoor Augmented Reality Application: ARQuake. Personal and Ubiquitous Computing.
TRIP: A Low-Cost Vision-Based Location System for Ubiquitous Computing. Personal and Ubiquitous Computing.
Location based applications for mobile augmented reality. AUIC '03 Proceedings of the Fourth Australasian User Interface Conference on User Interfaces 2003, Volume 18.
Realtime Personal Positioning System for Wearable Computers. ISWC '99 Proceedings of the 3rd IEEE International Symposium on Wearable Computers.
ISWC '01 Proceedings of the 5th IEEE International Symposium on Wearable Computers.
Context-based vision system for place and object recognition. ICCV '03 Proceedings of the Ninth IEEE International Conference on Computer Vision, Volume 2.
IEEE Transactions on Pattern Analysis and Machine Intelligence.
Hybrid approach to efficient text extraction in complex color images. Pattern Recognition Letters.
Integrated Head and Hand Tracking for Indoor and Outdoor Augmented Reality. VR '04 Proceedings of the IEEE Virtual Reality 2004.
In an augmented reality game, where virtual objects are overlaid on a real environment and players attack those virtual objects, accurate location estimation in the real environment is an important issue. Existing global positioning systems (GPS) used to track users' positions do not work inside buildings, and sensor-based systems such as Active Badge are expensive to install and maintain. Low-cost vision-based navigation systems have therefore been investigated. Since most indoor scenes consist of a floor, a ceiling, and walls, it is difficult to characterize such scenes distinctively. We propose an image matching method for navigation that uses image texts instead of the objects that appear uniformly in natural scenes. Image texts are widely distributed in our environments, are very useful for describing the contents of an image, and can be extracted more easily than other semantic contents; we obtain them using a method that combines edge density and multi-layer perceptrons with CAMShift. However, because cameras attached to moving vehicles (robots) or hand-held devices have low resolution, extraction by binarization followed by text recognition is not straightforward. We therefore perform image matching using a matching window based on the scale and orientation of the image texts and their neighborhood, so that distinct places containing the same image texts can be recognized.
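The edge-density stage of the text extraction pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the edge map, window size, and density threshold are hypothetical, and the multi-layer perceptron verification and CAMShift tracking stages are omitted.

```python
# Sketch of coarse text-candidate detection by edge density.
# Assumptions (not from the paper): the input is a precomputed binary
# edge map (2D list of 0/1), the window size and threshold are
# illustrative values. MLP verification and CAMShift tracking,
# which follow this stage in the described method, are omitted.

def edge_density(edge_map, top, left, h, w):
    """Fraction of edge pixels inside an h-by-w window."""
    total = 0
    for r in range(top, top + h):
        for c in range(left, left + w):
            total += edge_map[r][c]
    return total / (h * w)

def text_candidates(edge_map, win=4, threshold=0.5):
    """Slide a win-by-win window over the edge map; windows whose
    edge density exceeds the threshold are kept as coarse
    text-region candidates (text areas tend to be edge-dense)."""
    rows, cols = len(edge_map), len(edge_map[0])
    hits = []
    for top in range(0, rows - win + 1, win):
        for left in range(0, cols - win + 1, win):
            if edge_density(edge_map, top, left, win, win) >= threshold:
                hits.append((top, left))
    return hits

# Toy example: an 8x8 edge map with a dense 4x4 block of edges at the
# top-left corner; only that window should survive the density test.
edges = [[1 if r < 4 and c < 4 else 0 for c in range(8)] for r in range(8)]
print(text_candidates(edges))  # -> [(0, 0)]
```

In the full method, each surviving candidate would be passed to the multi-layer perceptron for verification and then tracked across frames with CAMShift; the matching window for place recognition is built from the scale and orientation of the detected text region and its neighborhood.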