In this paper, we present a framework for a navigation system in an indoor environment using only omnidirectional video. Within a Bayesian framework, we seek the place and image from the training data that best describe the current view and infer the location from them. Because the posterior distribution over the state space, conditioned on image similarity, is typically non-Gaussian, it is represented by sampling, and the location is predicted and verified over time with the Condensation algorithm. The system requires no complicated feature detection; it relies instead on a simple metric between two images. Even with low-resolution input, the system can achieve accurate results with respect to the training data when given favorable initial conditions.
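The localization loop described above can be sketched as a generic Condensation (particle-filter) step. This is a minimal illustration, not the paper's implementation: the 1-D corridor, the random "image" feature vectors, the sum-of-squared-differences metric, and the Gaussian motion model are all assumptions chosen to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: 50 locations along a 1-D corridor, each with a
# stored low-resolution image (random vectors stand in for real images).
train_locs = np.linspace(0.0, 10.0, 50)
train_imgs = rng.normal(size=(50, 64))

def image_distance(a, b):
    # A simple metric between two images (sum of squared differences),
    # standing in for the paper's low-level similarity measure.
    return np.sum((a - b) ** 2)

def likelihood(loc, observed):
    # Score a hypothesized location by comparing the observation
    # against the nearest training image.
    idx = np.argmin(np.abs(train_locs - loc))
    return np.exp(-image_distance(train_imgs[idx], observed) / 100.0)

def condensation_step(particles, weights, observed, motion_std=0.1):
    # 1. Resample particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]
    # 2. Predict: diffuse particles with a simple random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, size=len(particles))
    # 3. Measure: reweight by the image-similarity likelihood.
    w = np.array([likelihood(p, observed) for p in particles])
    return particles, w / w.sum()

# Track a camera that is actually at the location of training index 20;
# particles start spread uniformly (no prior knowledge of position).
particles = rng.uniform(0.0, 10.0, size=200)
weights = np.full(200, 1.0 / 200)
for _ in range(10):
    particles, weights = condensation_step(particles, weights, train_imgs[20])

estimate = np.sum(particles * weights)  # weighted-mean location estimate
```

Because the posterior is carried by the particle set rather than a Gaussian, the filter can maintain several competing location hypotheses until the image evidence disambiguates them.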