Discriminative methods (e.g., kernel regression, SVM) have been used extensively to solve problems such as object recognition, image alignment, and pose estimation from images. Regression methods typically map image features (X) to continuous (e.g., pose) or discrete (e.g., object category) values. A major drawback of existing regression methods is that samples are projected directly onto a subspace and hence fail to account for outliers, which are common in realistic training sets due to occlusion, specular reflections, or noise. Note that in existing regression methods, and discriminative methods in general, the regressor variables X are assumed to be noise free. Because of this assumption, discriminative methods suffer a significant degradation in performance when gross outliers are present. Despite its obvious importance, the problem of robust discriminative learning has been relatively unexplored in computer vision. This paper develops the theory of Robust Regression (RR) and presents an effective convex approach that uses recent advances in rank minimization. The framework applies to a variety of problems in computer vision, including robust linear discriminant analysis, multi-label classification, and head pose estimation from images. Several synthetic and real-world examples are used to illustrate the benefits of RR.
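The sensitivity of ordinary least-squares regression to gross outliers, and the benefit of a robust alternative, can be illustrated with a small sketch. This is not the paper's rank-minimization formulation of RR; it uses a standard Huber-weighted iteratively reweighted least squares (IRLS) estimator on synthetic 1-D data, purely to demonstrate the failure mode the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 2x + 1 plus small Gaussian noise and 10% gross outliers.
n = 200
x = rng.uniform(-1.0, 1.0, n)
y = 2.0 * x + 1.0 + 0.05 * rng.normal(size=n)
idx = rng.choice(n, 20, replace=False)
y[idx] += rng.uniform(3.0, 6.0, size=20)   # gross, one-sided corruptions

X = np.column_stack([x, np.ones(n)])       # design matrix [slope, intercept]

# Ordinary least squares: every residual weighted equally, so the
# outliers pull the fit (the intercept in particular) upward.
w_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

def irls_huber(X, y, delta=0.1, iters=50):
    """Robust fit via IRLS with Huber weights: residuals larger than
    `delta` are downweighted proportionally to 1/|r|."""
    w = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        r = y - X @ w
        wt = np.where(np.abs(r) <= delta, 1.0, delta / np.abs(r))
        Xw = X * wt[:, None]               # row-weighted design matrix
        w = np.linalg.solve(Xw.T @ X, Xw.T @ y)
    return w

w_rob = irls_huber(X, y)
truth = np.array([2.0, 1.0])
print("OLS error:   ", np.linalg.norm(w_ols - truth))
print("Robust error:", np.linalg.norm(w_rob - truth))
```

With 10% of the targets shifted by +3 to +6, the OLS intercept is biased upward by roughly the mean corruption times the outlier fraction, while the Huber-weighted fit stays close to the true parameters. Note this sketch only corrupts the targets y; the paper's setting is harder, since RR also handles outliers in the regressor variables X themselves.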