Distinctive Image Features from Scale-Invariant Keypoints. International Journal of Computer Vision.
PhoneGuide: museum guidance supported by on-device object recognition on mobile phones. Proceedings of the 4th International Conference on Mobile and Ubiquitous Multimedia (MUM '05).
Vision-based motion estimation for interaction with mobile devices. Computer Vision and Image Understanding.
Image alignment and stitching: a tutorial. Foundations and Trends® in Computer Graphics and Vision.
Feature Tracking for Mobile Augmented Reality Using Video Coder Motion Vectors. Proceedings of the 6th IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR '07).
Parallel Tracking and Mapping for Small AR Workspaces. Proceedings of the 6th IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR '07).
Efficient Extraction of Robust Image Features on Mobile Devices. Proceedings of the 6th IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR '07).
SURF: speeded up robust features. Proceedings of the 9th European Conference on Computer Vision (ECCV '06), Part I.
MARCH: mobile augmented reality for cultural heritage. Proceedings of the 17th ACM International Conference on Multimedia (MM '09).
Region categorization with mobile applications. Proceedings of the International Conference on Multimedia.
Speeding up mobile multimedia applications. Proceedings of the International Conference on Multimedia.
Designing real-time multimedia applications on mobile devices. Proceedings of the 3rd ACM SOSP Workshop on Networking, Systems, and Applications on Mobile Handhelds (MobiHeld '11).
Robust local features such as SIFT and SURF have been applied to many interesting image matching applications, but by nature they are computationally intensive, even on modern desktop PCs. We have developed a framework for efficient feature extraction and matching for still images on a mobile device. In this paper we extend that still-image framework to video sequences. Performing feature extraction and matching on every frame of a video sequence is inefficient; by tracking the content of the frames, feature extraction and image matching need to be performed only when new content appears. We show promising experimental results using this approach.
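The core idea of skipping redundant frames can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: a simple per-pixel difference against the last processed keyframe stands in for the paper's tracking step, and `extract` is a placeholder for any feature extractor (e.g. SIFT or SURF).

```python
import numpy as np

def process_sequence(frames, extract, overlap_thresh=0.8, pixel_tol=10.0):
    """Run feature extraction only on frames with new content.

    frames: iterable of 2-D numpy arrays (grayscale frames).
    extract: callback invoked on frames deemed to contain new content.
    overlap_thresh: fraction of unchanged pixels above which a frame is
        considered redundant and extraction is skipped.
    pixel_tol: intensity difference below which a pixel counts as unchanged.
    Returns a list of (frame_index, features) pairs for processed frames.
    """
    last_key = None
    results = []
    for i, frame in enumerate(frames):
        if last_key is not None:
            # Fraction of pixels unchanged since the last keyframe; a crude
            # stand-in for tracking the frame content.
            same = np.mean(
                np.abs(frame.astype(float) - last_key.astype(float)) < pixel_tol
            )
            if same >= overlap_thresh:
                continue  # old content: reuse previously extracted features
        last_key = frame
        results.append((i, extract(frame)))
    return results
```

With a sequence of three identical frames followed by two changed ones, extraction runs only twice (at the first frame and at the content change), which is the behavior the video extension relies on.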