Linear subspace analysis (LSA) has become ubiquitous in a wide range of problems in pattern recognition and computer vision. The essence of these approaches is that certain structures are intrinsically (or approximately) low dimensional: for example, the factorization approach to structure from motion (SFM) and the principal component analysis (PCA)-based approach to face recognition. In LSA, the singular value decomposition (SVD) is usually the basic mathematical tool. However, an analysis of its performance in the presence of noise has been lacking. We present such an analysis here. First, the "denoising capacity" of the SVD is analysed: given a rank-r matrix corrupted by noise, how much noise remains in the rank-r projection of that corrupted matrix? Second, we study the "learning capacity" of LSA-based recognition systems in a noise-corrupted environment. Specifically, LSA systems that model a data class as a rank-r column space are affected by noise both in the training samples (measurement noise means the learned subspace deviates from the "true subspace") and in the test sample (which likewise carries measurement noise on top of the ideal clean sample lying in the "true subspace"). These results should help predict aspects of performance and guide the design of better systems in computer vision, particularly for tasks such as SFM and face recognition. Our analysis agrees with certain observed phenomena, and these observations, together with our simulations, verify the correctness of our theory.
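The denoising step discussed above can be sketched in a few lines of NumPy: a ground-truth rank-r matrix is corrupted with Gaussian noise, projected back to rank r via a truncated SVD, and the residual noise is compared before and after projection. This is a minimal illustration, not the paper's analysis; the matrix sizes, rank, and noise level are arbitrary assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth rank-r matrix (sizes and rank are illustrative assumptions).
m, n, r = 100, 80, 5
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

# Corrupt it with i.i.d. Gaussian measurement noise.
sigma = 0.1
noisy = A + sigma * rng.standard_normal((m, n))

# Rank-r projection via truncated SVD: keep only the r largest singular values.
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
denoised = (U[:, :r] * s[:r]) @ Vt[:r, :]

# Residual noise energy before and after the rank-r projection.
err_before = np.linalg.norm(noisy - A, "fro")
err_after = np.linalg.norm(denoised - A, "fro")
print(err_before, err_after)  # err_after is typically a small fraction of err_before
```

Intuitively, the projection discards the noise energy lying outside the estimated r-dimensional subspaces, which is why the residual error drops; quantifying exactly how much noise survives the projection is the question the abstract's "denoising capacity" analysis addresses.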