We present a method for visual classification of actions and events captured from an egocentric point of view. The method addresses the challenge of a moving camera by building deformable graph models of actions. Action models are learned from low-resolution, roughly stabilized difference images acquired with a single monocular camera. In parallel, raw images from the same camera are used to estimate the user's location with a visual Simultaneous Localization and Mapping (SLAM) system. Action-location priors, learned from a labeled set of locations, further aid action classification and place events in context. We present results on a dataset collected in a cluttered environment, consisting of routine manipulations performed on untagged objects.
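The fusion of appearance-based action evidence with location priors can be sketched as a simple Bayesian combination, P(action | features, location) ∝ P(features | action) · P(action | location). The sketch below is illustrative only: the action names, locations, and prior values are hypothetical, not taken from the paper's dataset, and the actual system uses deformable graph models rather than raw likelihood vectors.

```python
import numpy as np

# Hypothetical action and location labels (for illustration only).
ACTIONS = ["pour", "stir", "open"]

# P(action | location): priors learned per labeled location (assumed values).
ACTION_PRIOR = {
    "sink":  np.array([0.6, 0.1, 0.3]),
    "stove": np.array([0.2, 0.7, 0.1]),
}

def classify(appearance_likelihood, location):
    """Return the MAP action and posterior, combining the appearance
    likelihoods P(features | action) with the location prior P(action | location)."""
    posterior = appearance_likelihood * ACTION_PRIOR[location]
    posterior /= posterior.sum()  # normalize to a distribution
    return ACTIONS[int(np.argmax(posterior))], posterior

# Ambiguous appearance evidence: the location prior breaks the tie.
likelihood = np.array([0.4, 0.4, 0.2])
print(classify(likelihood, "sink")[0])   # pour
print(classify(likelihood, "stove")[0])  # stir
```

With identical appearance evidence, the classifier's decision flips with the user's estimated location, which is the role the action-location priors play in bringing events into context.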