Datasets are an integral part of contemporary object recognition research. They have been the chief reason for the considerable progress in the field, not just as a source of large amounts of training data, but also as a means of measuring and comparing the performance of competing algorithms. At the same time, datasets have often been blamed for narrowing the focus of object recognition research, reducing it to a single benchmark performance number. Indeed, some datasets that started out as data-capture efforts aimed at representing the visual world have become closed worlds unto themselves (e.g. the Corel world, the Caltech-101 world, the PASCAL VOC world). With the focus on beating the latest benchmark numbers on the latest dataset, have we perhaps lost sight of the original purpose? The goal of this paper is to take stock of the current state of recognition datasets. We present a comparison study using a set of popular datasets, evaluated on a number of criteria, including relative data bias, cross-dataset generalization, effects of the closed-world assumption, and sample value. The experimental results, some rather surprising, suggest directions that can improve dataset collection as well as algorithm evaluation protocols. More broadly, the hope is to stimulate discussion in the community regarding this very important, but largely neglected, issue.
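The cross-dataset generalization criterion mentioned above can be illustrated with a minimal sketch: train a classifier on one dataset, then compare its accuracy on held-out data from the same dataset against its accuracy on a second dataset with shifted statistics. Everything below is a hypothetical toy stand-in, not the paper's actual experiment: synthetic Gaussian blobs play the role of image features, a nearest-centroid classifier plays the role of the recognition models, and the `shift` parameter mimics a dataset-specific capture bias.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_dataset(shift, n=500):
    # Two-class Gaussian data; `shift` mimics a dataset-specific capture bias
    # (this synthetic setup is an illustrative assumption, not the paper's data).
    X0 = rng.normal(loc=0.0 + shift, scale=1.0, size=(n, 2))
    X1 = rng.normal(loc=2.0 + shift, scale=1.0, size=(n, 2))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

def fit_centroids(X, y):
    # A nearest-centroid classifier stands in for the real recognition models.
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def accuracy(centroids, X, y):
    # Classify each point by its nearest class centroid and score the result.
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return float((dists.argmin(axis=1) == y).mean())

# "Dataset A" and a shifted "Dataset B" pose the same task under different statistics.
Xa_train, ya_train = make_dataset(shift=0.0)
Xa_test, ya_test = make_dataset(shift=0.0)
Xb_test, yb_test = make_dataset(shift=1.5)

centroids = fit_centroids(Xa_train, ya_train)
self_acc = accuracy(centroids, Xa_test, ya_test)   # train on A, test on A
cross_acc = accuracy(centroids, Xb_test, yb_test)  # train on A, test on B
percent_drop = 100.0 * (self_acc - cross_acc) / self_acc
```

The gap between `self_acc` and `cross_acc` (the "percent drop") is the kind of quantity a cross-dataset study reports: a model that has absorbed one dataset's biases scores well at home and markedly worse abroad, even though the underlying task is unchanged.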