Active Learning with Local Models
Neural Processing Letters
Supervised Training Using an Unsupervised Approach to Active Learning
Neural Processing Letters
Selective Learning for Multilayer Feedforward Neural Networks
IWANN '01 Proceedings of the 6th International Work-Conference on Artificial and Natural Neural Networks: Connectionist Models of Neurons, Learning Processes and Artificial Intelligence-Part I
Call and response: experiments in sampling the environment
SenSys '04 Proceedings of the 2nd international conference on Embedded networked sensor systems
Sensitivity Analysis for Selective Learning by Feedforward Neural Networks
Fundamenta Informaticae
Active learning with statistical models
Journal of Artificial Intelligence Research
We consider the question "How should one act when the only goal is to learn as much as possible?" Building on the theoretical results of Fedorov [1972] and MacKay [1992], we apply techniques from Optimal Experiment Design (OED) to guide the query/action selection of a neural network learner. We demonstrate that these techniques allow the learner to minimize its generalization error by exploring its domain efficiently and completely. We conclude that, while not a panacea, OED-based query/action selection has much to offer, especially in domains where its high computational costs can be tolerated.
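To give a flavor of the variance-based criteria that OED-style active learning builds on, here is a minimal sketch for a linear-Gaussian model. It is not the paper's full method (which integrates expected variance reduction over the input domain, and also covers neural networks and mixtures of Gaussians); instead it uses a common simplified proxy, querying the candidate input where the current model's predictive variance is highest. All function names are illustrative.

```python
import numpy as np

def predictive_variance(X_train, x_candidates, noise_var=1.0, ridge=1e-6):
    """Predictive variance of a linear-Gaussian model at candidate inputs.

    For y = w.x + Gaussian noise, the variance of the prediction at x
    is proportional to x^T (X^T X + ridge*I)^{-1} x: it is large where
    the training data constrain the weights poorly.
    """
    d = X_train.shape[1]
    A = X_train.T @ X_train + ridge * np.eye(d)
    A_inv = np.linalg.inv(A)
    # Quadratic form x_i^T A_inv x_i for each candidate row i.
    return noise_var * np.einsum("ij,jk,ik->i", x_candidates, A_inv, x_candidates)

def select_query(X_train, x_candidates, noise_var=1.0):
    """Pick the candidate whose prediction is currently most uncertain."""
    var = predictive_variance(X_train, x_candidates, noise_var)
    return int(np.argmax(var))
```

With training inputs clustered near the origin, the selector prefers a candidate far from the observed data, which is the qualitative behavior the abstract alludes to: the learner reduces generalization error by probing under-explored regions of its domain.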