The problem of designing input signals for optimal generalization is called active learning. In this article, we give a two-stage sampling scheme that reduces both the bias and the variance, and based on this scheme we propose two active learning methods. One is a multipoint-search method applicable to arbitrary models; its effectiveness is demonstrated through computer simulations. The other is an optimal sampling method for trigonometric polynomial models, which specifies the optimal sampling locations exactly.
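The variance-reduction aspect of active learning for trigonometric polynomial models can be illustrated with a small sketch (an assumption-laden example, not the paper's actual derivation): when an order-N trigonometric polynomial is fit by least squares under i.i.d. noise, the total variance of the parameter estimates is proportional to trace((X^T X)^{-1}), where X is the design matrix built from the chosen sampling locations. Equispaced sampling on [0, 2π) is the classical variance-minimizing choice for this model, which a comparison against random sampling locations makes concrete. The function names here (`trig_design`, `variance_score`) are illustrative, not from the paper.

```python
import numpy as np

def trig_design(x, order):
    """Design matrix with columns [1, cos x, sin x, ..., cos(Nx), sin(Nx)]."""
    cols = [np.ones_like(x)]
    for k in range(1, order + 1):
        cols.append(np.cos(k * x))
        cols.append(np.sin(k * x))
    return np.column_stack(cols)

def variance_score(x, order):
    """trace((X^T X)^{-1}): proportional to the summed parameter
    variances of the least-squares fit under i.i.d. noise."""
    X = trig_design(x, order)
    return np.trace(np.linalg.inv(X.T @ X))

M, order = 20, 2  # 20 samples, order-2 model (5 parameters)

# Equispaced locations on [0, 2*pi): X^T X becomes diagonal,
# diag(M, M/2, ..., M/2), so the score is (1 + 4*order)/M = 0.45 here.
x_equi = np.linspace(0.0, 2 * np.pi, M, endpoint=False)

# Random locations for comparison (fixed seed for reproducibility).
rng = np.random.default_rng(0)
x_rand = rng.uniform(0.0, 2 * np.pi, M)

print(variance_score(x_equi, order))  # 0.45
print(variance_score(x_rand, order))  # larger: random sampling is suboptimal
```

The score gap shows why the choice of sampling locations matters: the same number of noisy measurements yields strictly lower estimator variance at the optimal design points.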