We present a method for learning a set of generative models that represent selected image-domain features of a scene as a function of camera viewpoint. Such models are important for robotic tasks, such as probabilistic position estimation (i.e., localization), as well as for visualization. Our approach entails the automatic selection of the features and the synthesis of models of their visual behavior. The proposed model can generate maximum-likelihood views and can return a measure of the likelihood of a particular view from a particular camera position. Training the models involves regularizing observations of the features from known camera locations. Model uncertainty is evaluated using cross-validation, which allows an a priori evaluation of features and their attributes. The features themselves are initially selected as salient points by a measure of visual attention, and are tracked across multiple views. While this work is motivated by robot localization, the results have implications for image interpolation, image-based scene reconstruction, and object recognition. This paper presents a formulation of the problem and illustrative experimental results.
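The pipeline the abstract describes can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's implementation: it collapses camera pose to one dimension, stands in "feature appearance" with a scalar, fits a regularized (ridge) polynomial as the generative model, estimates model uncertainty from training residuals (where the paper would use cross-validation), and localizes a new observation by maximizing a Gaussian log-likelihood over candidate poses. All function names and parameters here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: a feature's appearance varies smoothly with
# camera pose (1-D toy stand-in for image-domain feature attributes).
poses = np.linspace(0.0, 1.0, 20)
appearance = np.tanh(4.0 * (poses - 0.5)) + 0.05 * rng.standard_normal(20)

# Generative model: ridge-regularized polynomial of appearance vs. pose.
degree, lam = 5, 1e-3
A = np.vander(poses, degree + 1)
w = np.linalg.solve(A.T @ A + lam * np.eye(degree + 1), A.T @ appearance)

def predict(x):
    """Maximum-likelihood (mean) appearance the model generates at pose x."""
    return np.vander(np.atleast_1d(x), degree + 1) @ w

# Residual variance as a crude uncertainty estimate; the paper instead
# evaluates uncertainty by cross-validation to rate features a priori.
sigma2 = float(np.mean((A @ w - appearance) ** 2))

def log_likelihood(obs, x):
    """Gaussian log-likelihood of an observed appearance at candidate pose x."""
    mu = predict(x)[0]
    return -0.5 * ((obs - mu) ** 2 / sigma2 + np.log(2.0 * np.pi * sigma2))

# Localization: pick the candidate pose that maximizes the likelihood
# of the newly observed feature appearance.
candidates = np.linspace(0.0, 1.0, 200)
obs = np.tanh(4.0 * (0.3 - 0.5))          # appearance seen at true pose 0.3
ll = np.array([log_likelihood(obs, x) for x in candidates])
x_hat = candidates[np.argmax(ll)]          # estimated camera pose
```

In the paper's setting each feature contributes such a model over a multi-dimensional pose space, and per-feature likelihoods are combined for probabilistic position estimation; the cross-validated uncertainty lets unreliable features be discounted before deployment.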