Practical methods of optimization (2nd ed.)
Learning internal representations by error propagation. In: Parallel distributed processing: explorations in the microstructure of cognition, vol. 1
Hierarchical mixtures of experts and the EM algorithm. Neural Computation
Neural Networks for Pattern Recognition
A learning algorithm for continually running fully recurrent neural networks. Neural Computation
Expert Systems with Applications: An International Journal
Bootstrap prediction for returns and volatilities in GARCH models. Computational Statistics & Data Analysis
Risk management application of the recurrent mixture density network models. In: ICANN/ICONIP'03 Proceedings of the 2003 joint international conference on Artificial neural networks and neural information processing
Using GARCH-GRNN model to forecast financial time series. In: ISCIS'05 Proceedings of the 20th international conference on Computer and Information Sciences
Bayesian estimation of generalized hyperbolic skewed student GARCH models. Computational Statistics & Data Analysis
This paper presents an improved nonlinear mixture density approach to modeling the time-dependent variance of a time series. First, we develop a recurrent mixture density network that explicitly models the time-conditional mixing coefficients, as well as the means and variances of its Gaussian mixture components. Second, we derive training equations with which all the network weights are inferred in the maximum likelihood framework. Crucially, we compute temporal derivatives through time for dynamic estimation of the variance network parameters. Experimental results show that, compared with a traditional linear heteroskedastic model and with a nonlinear mixture density network trained with static derivatives, our dynamic recurrent network converges to more accurate results with better statistical characteristics and economic performance.
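As a minimal illustrative sketch (not the authors' implementation), the mixture density output layer described in the abstract can be written as follows: a recurrent hidden state is mapped through a softmax to the time-conditional mixing coefficients, and through an exponential to the component variances so they stay positive, with the negative log-likelihood of the resulting Gaussian mixture serving as the maximum-likelihood training objective. The weight matrices `W_pi`, `W_mu`, `W_s` and the function names are hypothetical.

```python
import numpy as np

def mdn_outputs(h, W_pi, W_mu, W_s):
    """Map a recurrent hidden state h to Gaussian-mixture parameters.

    W_pi, W_mu, W_s are hypothetical output weight matrices
    (hidden units x mixture components), not taken from the paper.
    """
    # Softmax keeps the time-conditional mixing coefficients
    # positive and summing to one.
    a = h @ W_pi
    pi = np.exp(a - a.max()) / np.exp(a - a.max()).sum()
    mu = h @ W_mu              # component means (unconstrained)
    sigma2 = np.exp(h @ W_s)   # exponential keeps variances positive
    return pi, mu, sigma2

def mdn_nll(y, pi, mu, sigma2):
    """Negative log-likelihood of a scalar target y under the mixture.

    Minimizing this over all weights is the maximum-likelihood
    objective referred to in the abstract.
    """
    comp = pi * np.exp(-0.5 * (y - mu) ** 2 / sigma2) / np.sqrt(2 * np.pi * sigma2)
    return -np.log(comp.sum())
```

In the dynamic setting the abstract describes, the gradient of this loss would be propagated back through the recurrent hidden state over time (rather than treating each step's derivatives as static), which is what distinguishes the dynamic recurrent network from the statically trained mixture density network.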