Object identification (OID) is specialized recognition where the category is known (e.g. cars) and the algorithm recognizes an object's exact identity (e.g. Bob's BMW). Two special challenges characterize OID. (1) Inter-class variation is often small (many cars look alike) and may be dwarfed by illumination or pose changes. (2) There may be many classes but few, or just one, positive "training" examples per class. Due to (1), a solution must locate possibly subtle object-specific salient features (a door handle) while avoiding distracting ones (a specular highlight). However, (2) rules out direct techniques of feature selection. We describe an online algorithm that takes one model image from a known category and builds an efficient "same" vs. "different" classification cascade by predicting the most discriminative feature set for that object. Our method not only estimates the saliency and scoring function for each candidate feature, but also models the dependency between features, building an ordered feature sequence unique to a specific model image and maximizing cumulative information content. Learned stopping thresholds make the classifier very efficient. To make this possible, category-specific characteristics are learned automatically in an off-line training procedure from labeled image pairs of the category, without prior knowledge about the category. Our method, using the same algorithm for both cars and faces, outperforms a wide variety of other methods.
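The cascade idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the saliency scores, the accept/reject thresholds, and all function names here are hypothetical placeholders, and the sketch orders features by individual saliency only, whereas the paper additionally models dependencies between features when building the sequence.

```python
def build_cascade(features, saliency):
    # Order candidate features by a (hypothetical) per-feature saliency
    # score, greedily approximating a sequence with high cumulative
    # information content. The paper's method also accounts for
    # inter-feature dependencies, which this sketch omits.
    return sorted(features, key=lambda f: saliency[f], reverse=True)

def classify(scores, ordered_features, accept=2.0, reject=-2.0):
    # Accumulate per-feature "same vs. different" evidence for an image
    # pair; stop early once the running total crosses a stopping
    # threshold (thresholds are learned in the paper; fixed here).
    total = 0.0
    for f in ordered_features:
        total += scores[f]
        if total >= accept:
            return "same"
        if total <= reject:
            return "different"
    return "same" if total >= 0 else "different"
```

Because the most salient features are evaluated first and the loop exits as soon as the evidence is decisive, most image pairs are resolved after inspecting only a few features, which is what makes the cascade efficient.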