Properly biasing a learner's hypothesis space has been shown to improve generalization performance. Proposed methods for achieving this range from manually designing and introducing a bias into the learner to learning the bias automatically. Multitask learning methods fall into the latter category: when several related tasks drawn from the same domain are available, they use the domain knowledge encoded in the training examples of all the tasks as a source of bias. We extend ideas from this field and describe a new approach that identifies a family of hypotheses, represented by a manifold in hypothesis space, that embodies domain-related knowledge. This family is learned from training examples sampled from a group of related tasks, and learning models trained on these tasks may only select hypotheses that belong to the family. We show that the approach encompasses a large variety of learnable families, and a statistical analysis on a class of related tasks shows significantly improved performance under this approach.
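The abstract does not specify an algorithm, but the idea can be illustrated with the simplest kind of family: a linear (affine) manifold in the weight space of linear models. The sketch below is hypothetical and not the paper's method; it fits one unconstrained hypothesis per related task, learns a low-dimensional subspace through those solutions via PCA as the "family of hypotheses," and then forces a new task's model to choose a hypothesis on that manifold. All names and the synthetic task generator are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each task's true weight vector lies near a shared
# 2-D affine subspace of a 10-D weight space, standing in for the
# "manifold in hypothesis space" that encodes domain knowledge.
d, k, n_tasks, n_samples = 10, 2, 8, 50
basis_true = rng.standard_normal((k, d))
center = rng.standard_normal(d)
true_w = center + rng.standard_normal((n_tasks, k)) @ basis_true

# Step 1: fit an unconstrained hypothesis for each related task.
task_w = []
for t in range(n_tasks):
    X = rng.standard_normal((n_samples, d))
    y = X @ true_w[t] + 0.1 * rng.standard_normal(n_samples)
    task_w.append(np.linalg.lstsq(X, y, rcond=None)[0])
task_w = np.array(task_w)

# Step 2: learn the family of hypotheses as an affine subspace through
# the per-task solutions (PCA via SVD on the centered weight matrix).
mean_w = task_w.mean(axis=0)
_, _, vt = np.linalg.svd(task_w - mean_w)
family = vt[:k]                      # basis of the learned manifold

# Step 3: a new related task may only select hypotheses on the manifold,
# i.e. w = mean_w + c @ family, so we solve only for the coordinates c.
X_new = rng.standard_normal((n_samples, d))
w_new_true = center + rng.standard_normal(k) @ basis_true
y_new = X_new @ w_new_true + 0.1 * rng.standard_normal(n_samples)

Z = X_new @ family.T                 # inputs projected through the basis
c = np.linalg.lstsq(Z, y_new - X_new @ mean_w, rcond=None)[0]
w_constrained = mean_w + c @ family  # hypothesis restricted to the family
```

Restricting the search to the k-dimensional family means the new task estimates only k coefficients instead of d weights, which is where the bias (and the improved generalization with few examples) comes from.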