This paper presents a new learning algorithm for multitask pattern recognition (MTPR) problems. We consider learning multiple multiclass classification tasks online, where no information is ever provided about the task category of a training example. The algorithm therefore needs an automated task-recognition capability to learn the different classification tasks properly. Learning is conducted online: training examples for the different tasks are randomly interleaved and presented sequentially, one after another. We assume that the classification tasks are related to each other and that both the tasks and their training examples appear in random order during online training. The learning algorithm must therefore continually switch from learning one task to another whenever the training examples change to a different task, which in turn requires it to detect task changes automatically and to exploit knowledge of previous tasks to learn new tasks quickly. The performance of the algorithm is evaluated on ten MTPR problems constructed from five University of California, Irvine (UCI) data sets. The experiments verify that the proposed algorithm can indeed acquire and accumulate task knowledge, and that transferring knowledge from already-learned tasks speeds up knowledge acquisition on new tasks and improves the final classification accuracy. In addition, introducing the reorganization process greatly improves task-categorization accuracy on all MTPR problems, even when the presentation order of class training examples is fairly biased.
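The abstract gives no algorithmic details, so the following is purely an illustrative sketch of the general setting it describes, not the paper's RBF-network method. It shows one simple way an online learner might detect a task change (a sustained spike in the recent error rate over a sliding window) and then either recall a previously stored task model or allocate a new one. The nearest-centroid classifiers, the window size, the error threshold, and the EMA update rate are all invented for illustration.

```python
import numpy as np

class OnlineMultitaskLearner:
    """Illustrative sketch only (not the paper's algorithm): one
    nearest-centroid classifier per task, plus a task-change heuristic
    that fires when the recent error rate rises above a threshold."""

    def __init__(self, window=20, error_threshold=0.5):
        self.tasks = []            # list of dicts: class label -> centroid
        self.current = None        # index of the task currently being learned
        self.recent_errors = []    # sliding window of 0/1 prediction errors
        self.window = window
        self.error_threshold = error_threshold

    def _predict_with(self, task, x):
        # Nearest-centroid prediction; None if the task model is still empty.
        if not task:
            return None
        labels = list(task)
        dists = [np.linalg.norm(x - task[c]) for c in labels]
        return labels[int(np.argmin(dists))]

    def predict(self, x):
        if self.current is None:
            return None
        return self._predict_with(self.tasks[self.current], x)

    def partial_fit(self, x, y):
        x = np.asarray(x, dtype=float)
        if self.current is None:
            self.tasks.append({})
            self.current = 0
        # Record whether the current task model explains this example.
        pred = self.predict(x)
        self.recent_errors.append(0 if pred == y else 1)
        self.recent_errors = self.recent_errors[-self.window:]
        # Heuristic task-change detection: a sustained high error rate
        # suggests the stream has switched to a different task.
        if (len(self.recent_errors) == self.window and
                np.mean(self.recent_errors) > self.error_threshold):
            self._switch_task(x, y)
        # Incrementally update the centroid of class y for the current task.
        task = self.tasks[self.current]
        if y in task:
            task[y] = 0.9 * task[y] + 0.1 * x
        else:
            task[y] = x.copy()

    def _switch_task(self, x, y):
        # Prefer a previously learned task that already explains (x, y),
        # so knowledge of old tasks is retained and reused; otherwise
        # allocate a fresh task model.
        for i, task in enumerate(self.tasks):
            if i != self.current and self._predict_with(task, x) == y:
                self.current = i
                break
        else:
            self.tasks.append({})
            self.current = len(self.tasks) - 1
        self.recent_errors = []
```

In this toy setting, feeding the learner a stream from one labeling and then a conflicting one causes the error window to fill, triggering a switch to a second task model while the first model is kept intact, which loosely mirrors the automatic task recognition and knowledge accumulation the abstract describes.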