The comparative accuracy of feedforward neural networks (NNs) applied to time series forecasting remains uncertain, because most studies suffer from one of two defects: they select the NN from a wide range of alternatives so as to present the forecast accuracy results in the best light, or they fail to compare the results against suitable benchmarks. To overcome both objections, this paper proposes an objective procedure for specifying a feedforward NN model and evaluates its effectiveness by comparing its forecasting performance with established benchmarks. After the input nodes are selected by cross-validation, a three-stage procedure sequentially selects first the learning rate, then the number of hidden nodes, and finally the initial weights. The paper shows that NNs perform robustly only when these three factors are considered jointly. In an empirical demonstration of the strength of the approach, NN models built by considering all three factors outperformed competitive statistical methods when evaluated rigorously on a standard test data set.
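The sequential three-stage search described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the AR(2) data, the candidate grids, the validation split, and the use of scikit-learn's `MLPRegressor` (with `random_state` standing in for the choice of initial weights) are all assumptions made for the example.

```python
import warnings
import numpy as np
from sklearn.neural_network import MLPRegressor

warnings.filterwarnings("ignore")  # short training runs may emit convergence warnings

# Illustrative AR(2) series; the paper evaluates on a standard benchmark data set instead.
rng = np.random.default_rng(0)
n = 300
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal(scale=0.1)

# Input lags: assume cross-validation has already chosen p = 2 lagged inputs.
p = 2
X = np.column_stack([y[p - 1 - k : n - 1 - k] for k in range(p)])
target = y[p:]

# Time-ordered split: fit on the early part of the series, validate on the later part.
split = 220
X_tr, X_va = X[:split], X[split:]
t_tr, t_va = target[:split], target[split:]

def val_mse(**params):
    """Fit one candidate network and return its validation MSE."""
    model = MLPRegressor(solver="sgd", max_iter=500, **params)
    model.fit(X_tr, t_tr)
    return np.mean((model.predict(X_va) - t_va) ** 2)

# Stage 1: learning rate, with the other factors held at provisional defaults.
best_lr = min([0.001, 0.01, 0.1],
              key=lambda lr: val_mse(learning_rate_init=lr,
                                     hidden_layer_sizes=(4,), random_state=0))

# Stage 2: number of hidden nodes, given the chosen learning rate.
best_h = min([2, 4, 8],
             key=lambda h: val_mse(learning_rate_init=best_lr,
                                   hidden_layer_sizes=(h,), random_state=0))

# Stage 3: initial weights (via the random seed), given the first two choices.
best_seed = min(range(5),
                key=lambda s: val_mse(learning_rate_init=best_lr,
                                      hidden_layer_sizes=(best_h,), random_state=s))

print(best_lr, best_h, best_seed)
```

The point of the sequential design is that each factor is fixed before the next is searched, so the three stages cost the sum, not the product, of the grid sizes, while the final model still reflects all three choices jointly.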