All vehicles are cars: subclass preferences in container concepts. Proceedings of the 2nd ACM International Conference on Multimedia Retrieval.
Script data for attribute-based recognition of composite activities. ECCV'12: Proceedings of the 12th European Conference on Computer Vision, Part I.
Metric learning for large scale image classification: generalizing to new classes at near-zero cost. ECCV'12: Proceedings of the 12th European Conference on Computer Vision, Part II.
What makes a good detector? Structured priors for learning from few examples. ECCV'12: Proceedings of the 12th European Conference on Computer Vision, Part V.
Learning attribute relation in attribute-based zero-shot classification. IScIDE'12: Proceedings of the Third Sino-foreign-interchange Conference on Intelligent Science and Intelligent Data Engineering.
Enhanced representation and multi-task learning for image annotation. Computer Vision and Image Understanding.
Semi-supervised learning on a budget: scaling up to large datasets. ACCV'12: Proceedings of the 11th Asian Conference on Computer Vision, Part I.
Knives are picked before slices are cut: recognition through activity sequence analysis. Proceedings of the 5th International Workshop on Multimedia for Cooking & Eating Activities.
While knowledge transfer (KT) between object classes has been accepted as a promising route towards scalable recognition, most experimental KT studies are surprisingly limited in the number of object classes considered. To support claims about the scalability of KT, we therefore advocate evaluating KT in a large-scale setting. To this end, we provide an extensive evaluation of three popular approaches to KT on a recently proposed large-scale data set, the ImageNet Large Scale Visual Recognition Competition 2010 data set. In a first setting, the KT approaches are compared directly to one-vs-all classification, a baseline often neglected in KT papers; in a second setting, we evaluate their ability to enable zero-shot learning. While none of the KT methods improves over one-vs-all classification, they prove valuable for zero-shot learning, especially hierarchical and direct-similarity-based KT. We also propose and describe several extensions of the evaluated approaches that are necessary for this large-scale study.
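To make the zero-shot setting concrete, the following is a minimal sketch of direct-similarity-based knowledge transfer: a classifier for an unseen class is formed as a similarity-weighted combination of classifiers trained for known classes. All names, dimensions, and the random data here are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_known, n_unseen, dim = 5, 2, 16

# Pretend these are trained one-vs-all linear classifiers,
# one row per known (seen) class.
W_known = rng.normal(size=(n_known, dim))

# Hypothetical semantic similarity of each unseen class to each known
# class (e.g. derived from a class hierarchy); normalize rows to sum to 1.
S = rng.random(size=(n_unseen, n_known))
S /= S.sum(axis=1, keepdims=True)

# Zero-shot classifiers: similarity-weighted combinations of the
# known-class classifiers. No training data for unseen classes is used.
W_unseen = S @ W_known

def predict_unseen(x):
    """Return the index of the highest-scoring unseen class for feature x."""
    return int(np.argmax(W_unseen @ x))

x = rng.normal(size=dim)
pred = predict_unseen(x)
```

The same scaffold covers the one-vs-all baseline: instead of combining known-class classifiers, one would train a separate classifier per class, which is why that baseline needs labeled examples of every class and cannot handle the zero-shot case.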