We present an algorithm for improving the accuracy of recurrent neural networks (RNNs) for time series forecasting. The improvement is achieved by combining a large number of RNNs, each trained on a different set of examples. The algorithm is based on boosting and concentrates training on difficult examples but, unlike the original boosting algorithm, takes all available examples into account. We study the behavior of the method on three reference time series, with three loss functions and several values of a parameter, and compare its performance with that of other regression methods.