We introduce the problem of zero-data learning, in which a model must generalize to classes or tasks for which no training data are available and only a description of each class or task is provided. Zero-data learning is useful when the set of classes to distinguish or tasks to solve is very large and is not entirely covered by the training data. The main contributions of this work are a general formalization of zero-data learning, an experimental analysis of its properties, and empirical evidence that significant generalization is possible in this setting. The experimental work of this paper addresses two character recognition classification problems and a multitask ranking problem in the context of drug discovery. Finally, we conclude by discussing how this new framework could lead to a novel perspective on extending machine learning towards AI, where an agent can be given a specification of a learning problem before attempting to solve it (with very few or even zero examples).
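The core idea above can be sketched in a few lines: instead of learning one output per class, learn a map from inputs into the space of class descriptions, so that a class never seen during training can still be recognized from its description alone. The sketch below uses synthetic data and simple attribute vectors as descriptions; all names, data, and the least-squares model are illustrative assumptions, not the paper's actual model or datasets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Class descriptions: 4 classes, each described by a 3-dim attribute vector.
# Class 3 is the "zero-data" class: known only through its description.
descriptions = np.array([
    [1., 0., 0.],
    [0., 1., 0.],
    [0., 0., 1.],
    [1., 1., 0.],   # no training examples for this class
])
seen = [0, 1, 2]
unseen = [3]

# Synthetic training data: inputs are noisy linear images of the attributes.
W_true = rng.normal(size=(5, 3))
X, Y = [], []
for c in seen:
    for _ in range(50):
        X.append(W_true @ descriptions[c] + 0.1 * rng.normal(size=5))
        Y.append(descriptions[c])
X, Y = np.array(X), np.array(Y)

# Learn a map from input space to description space (least squares here;
# any regressor trained on the seen classes would play the same role).
M, *_ = np.linalg.lstsq(X, Y, rcond=None)

def predict(x, candidate_classes):
    """Score each candidate class by how close x's predicted attribute
    vector is to that class's description; no per-class training needed."""
    a = x @ M
    dists = [np.linalg.norm(a - descriptions[c]) for c in candidate_classes]
    return candidate_classes[int(np.argmin(dists))]

# The model can now be queried about the unseen class purely from its
# description, even though it contributed zero training examples.
x_new = W_true @ descriptions[3] + 0.1 * rng.normal(size=5)
print(predict(x_new, seen + unseen))
```

Because classification reduces to comparing predicted attributes against descriptions, adding a new class costs nothing beyond writing down its description, which is what makes the framework attractive when the class set is very large.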