A model of inductive bias learning
Journal of Artificial Intelligence Research
A Bayesian model of learning to learn by sampling from multiple tasks is presented. The multiple tasks are themselves generated by sampling from a distribution over an environment of related tasks. Such an environment is shown to be naturally modelled within a Bayesian context by the concept of an objective prior distribution. It is argued that for many common machine learning problems, although in general we do not know the true (objective) prior for the problem, we do have some idea of a set of possible priors to which the true prior belongs. It is shown that under these circumstances a learner can use Bayesian inference to learn the true prior by learning sufficiently many tasks from the environment. In addition, bounds are given on the amount of information required to learn a task when it is simultaneously learnt with several other tasks. The bounds show that if the learner has little knowledge of the true prior, but the dimensionality of the true prior is small, then sampling multiple tasks is highly advantageous. The theory is applied to the problem of learning a common feature set or, equivalently, a low-dimensional representation (LDR) for an environment of related tasks.
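The hierarchical sampling model in the abstract — tasks drawn from an environment distribution, and the learner recovering that distribution from sufficiently many tasks — can be sketched numerically. This is a minimal illustration under assumed Gaussian conjugacy, not the paper's actual construction: the environment's "objective prior" is taken to be a normal with unknown mean, and an empirical-Bayes average stands in for full Bayesian inference over the set of possible priors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed environment: the objective prior over task parameters is
# N(mu_star, tau^2); each task t draws theta_t from it, and each
# within-task observation is N(theta_t, sigma^2). All values illustrative.
mu_star, tau, sigma = 2.0, 1.0, 0.5
n_tasks, m_samples = 50, 10

thetas = rng.normal(mu_star, tau, size=n_tasks)              # sample tasks
data = rng.normal(thetas[:, None], sigma, (n_tasks, m_samples))  # sample data

# Empirical-Bayes estimate of the unknown prior mean: average the
# per-task sample means. As n_tasks grows this concentrates on mu_star,
# illustrating that the true prior is learnable from enough tasks even
# though no single task identifies it.
task_means = data.mean(axis=1)
mu_hat = task_means.mean()
print(abs(mu_hat - mu_star))  # error shrinks as n_tasks increases
```

Because the prior here is one-dimensional, few tasks suffice — consistent with the abstract's claim that low-dimensional true priors make multitask sampling highly advantageous.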