Nearest neighbor forecasting models are attractive for their simplicity and their ability to predict complex nonlinear behavior. They rely on the assumption that observations similar to the target observation are likely to have similar outcomes. A common practice in nearest neighbor model selection is to compute a globally optimal number of neighbors on a validation set and then apply it to all incoming queries. For certain queries, however, this number may be suboptimal, producing forecasts that deviate substantially from the true realization. To address this problem we propose an alternative approach: training ensembles of nearest neighbor predictors that determine the best number of neighbors for each individual query. We demonstrate that the ensemble forecasts improve significantly on those of the globally optimal single predictors.
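The per-query idea can be sketched as a dynamic-selection scheme: instead of one global k, each candidate k is scored on the validation queries most similar to the incoming query, and the locally best k is used. This is a minimal illustrative sketch, not the paper's actual method; all function names (`knn_forecast`, `dynamic_k_forecast`), the candidate set `ks`, the squared-error criterion, and the local neighborhood size `n_local` are assumptions for illustration.

```python
import numpy as np

def knn_forecast(X_train, y_train, query, k):
    """Plain k-NN prediction: average the outcomes of the k
    training observations closest to the query (Euclidean distance)."""
    d = np.linalg.norm(X_train - query, axis=1)
    idx = np.argsort(d)[:k]
    return y_train[idx].mean()

def dynamic_k_forecast(X_train, y_train, X_val, y_val, query,
                       ks=(1, 3, 5, 9), n_local=10):
    """Illustrative per-query selection of k (names and criterion
    are assumptions, not the paper's method): score each candidate
    k on the n_local validation queries most similar to `query`,
    then forecast with the locally best k."""
    d = np.linalg.norm(X_val - query, axis=1)
    local = np.argsort(d)[:n_local]  # validation queries resembling `query`
    errs = []
    for k in ks:
        preds = np.array([knn_forecast(X_train, y_train, X_val[i], k)
                          for i in local])
        errs.append(np.mean((preds - y_val[local]) ** 2))
    best_k = ks[int(np.argmin(errs))]
    return knn_forecast(X_train, y_train, query, best_k)
```

A globally tuned model would instead fix one k from the whole validation set; the sketch above lets queries in smooth regions use many neighbors while queries near regime changes can fall back to few.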