A novel class of models for multivariate time series is presented. We consider hierarchical mixture-of-experts (HME) models in which the experts, or building blocks of the model, are vector autoregressions (VARs). The VAR-HME model is assumed to partition the covariate space, which explicitly includes time as a covariate, into overlapping regions called overlays. Within each overlay a given number of VAR experts compete with one another, so that the expert most suitable for the overlay receives a large weight. The weights have a parametric form that allows the modeler to include relevant covariates. The model parameters are estimated via the EM (expectation-maximization) algorithm. A new algorithm is also developed to select the optimal number of overlays, the number of VAR models, and the orders of the VARs that define a particular VAR-HME configuration; it uses the Bayesian information criterion (BIC) as the optimality criterion. Issues of model checking and inference of latent structure in multiple time series are investigated. The new methodology is illustrated by analyzing a synthetic data set and a 7-channel electroencephalogram data set.
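The abstract does not give implementation details, so the following is only a minimal sketch of the two ideas it names: EM estimation of a mixture of autoregressive experts, and BIC as the selection criterion. It is deliberately simplified to a scalar series with two AR(1) experts and constant mixing weights (the paper's gating weights are covariate-dependent, and its experts are full VARs); all coefficient values and variable names are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic series switching between two AR(1) regimes (hypothetical coefficients).
n = 400
y = np.zeros(n)
for t in range(1, n):
    phi_true = 0.9 if t < n // 2 else -0.5
    y[t] = phi_true * y[t - 1] + rng.normal(scale=0.3)

x, target = y[:-1], y[1:]          # lagged predictor and response

# EM for a two-expert mixture of AR(1) regressions.
K = 2
phi = np.array([0.5, -0.1])        # AR coefficient of each expert (initial guess)
sigma2 = np.ones(K)                # innovation variance of each expert
pi = np.full(K, 1.0 / K)           # constant mixing weights (simplification)

for _ in range(100):
    # E-step: responsibility of each expert for each observation.
    resid = target[:, None] - x[:, None] * phi[None, :]
    logp = -0.5 * (np.log(2 * np.pi * sigma2) + resid**2 / sigma2) + np.log(pi)
    logp -= logp.max(axis=1, keepdims=True)
    r = np.exp(logp)
    r /= r.sum(axis=1, keepdims=True)

    # M-step: weighted least squares per expert, then update the weights.
    for k in range(K):
        w = r[:, k]
        phi[k] = (w * x * target).sum() / (w * x * x).sum()
        sigma2[k] = (w * (target - phi[k] * x) ** 2).sum() / w.sum()
    pi = r.mean(axis=0)

# BIC for this configuration: parameter count times log(n) minus twice the log-likelihood.
resid = target[:, None] - x[:, None] * phi[None, :]
dens = (pi * np.exp(-0.5 * resid**2 / sigma2) / np.sqrt(2 * np.pi * sigma2)).sum(axis=1)
loglik = np.log(dens).sum()
n_params = 2 * K + (K - 1)         # phi and sigma2 per expert, plus free mixing weights
bic = n_params * np.log(len(target)) - 2 * loglik
print("phi:", np.sort(phi), "BIC:", bic)
```

In the paper's search algorithm, a BIC value like the one above would be computed for each candidate configuration (number of overlays, number of VAR experts, VAR orders), and the configuration minimizing BIC would be retained.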