This paper discusses the use of a recent boosting algorithm for recurrent neural networks as a tool for modeling nonlinear dynamical systems. It combines a large number of RNNs, each of which is trained on a different set of examples. The method is based on the boosting algorithm, in which the learning process concentrates on difficult examples. Unlike the original algorithm, however, all available examples are taken into account. The ability of the method to internally encode useful information about the underlying process is illustrated by several experiments on well-known chaotic processes. Our model is able to find an appropriate internal representation of the underlying process from observations of a subset of the state variables, and we obtain improved prediction performance.
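The key idea described above, emphasizing hard examples while still keeping every example in play, can be sketched as a weight-update step. The abstract does not give the exact update rule, so the function below is only a minimal illustration under assumed details: the names `update_distribution` and `sample_training_set`, and the mixing parameter `floor`, are hypothetical, and mixing with the uniform distribution is one simple way to guarantee that no example is ever fully discarded.

```python
import numpy as np

def update_distribution(losses, floor=0.05):
    """Sampling distribution for the next learner, from per-example losses.

    Examples with high loss under the previous learner receive more
    probability mass; mixing with the uniform distribution keeps every
    example's probability strictly positive (hypothetical update rule,
    not the paper's exact scheme).
    """
    losses = np.asarray(losses, dtype=float)
    n = len(losses)
    uniform = np.full(n, 1.0 / n)
    # Loss-proportional weights; fall back to uniform if all losses are zero.
    w = losses / losses.sum() if losses.sum() > 0 else uniform.copy()
    p = (1.0 - floor) * w + floor * uniform
    return p / p.sum()

def sample_training_set(X, y, p, size, rng):
    """Resample a training set for the next RNN according to p."""
    idx = rng.choice(len(X), size=size, p=p)
    return X[idx], y[idx]
```

Each boosting round would then train a fresh RNN on the resampled set, measure its per-example losses, and feed those losses back into `update_distribution` for the next round.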