We present an experimental comparison of the performance of representative saliency detectors based on three guiding principles for the detection of salient image locations: locations of maximum stability with respect to image transformations, locations of greatest image complexity, and most discriminant locations. It is shown that discriminant saliency performs best in terms of 1) capturing information relevant for classification, 2) robustness to image clutter, and 3) stability under the image transformations associated with variations of 3D object pose. We then investigate the dependence of discriminant saliency on the underlying set of candidate discriminant features by comparing the performance achieved with three popular feature sets: the discrete cosine transform, a Gabor decomposition, and a Haar wavelet decomposition. It is shown that, even though the different feature sets produce equivalent results, there may be advantages in considering features explicitly learned from examples of the image classes of interest.
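As an illustrative sketch (not the authors' implementation), one of the candidate feature sets mentioned above, the discrete cosine transform, can be computed over an image patch with an orthonormal DCT-II basis; the resulting coefficients would serve as the pool of candidate discriminant features. The patch size of 8x8 and the use of NumPy are assumptions for illustration only.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)
    # Row u, column x: cos(pi * (2x + 1) * u / (2n))
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] /= np.sqrt(2)          # scale the DC row for orthonormality
    return M * np.sqrt(2.0 / n)

def dct2_features(patch):
    """2-D DCT of a square patch; coefficients act as candidate features."""
    M = dct_matrix(patch.shape[0])
    return M @ patch @ M.T      # separable transform: rows, then columns

# Example: extract DCT coefficients from a random 8x8 patch
patch = np.random.rand(8, 8)
coeffs = dct2_features(patch)
```

Because the basis is orthonormal, the transform is invertible (`M.T @ coeffs @ M` recovers the patch), so the coefficient set carries the same information as the raw pixels while concentrating energy in a few low-frequency features.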