Modeling and identification of high-dimensional systems, involving signals with many components, pose severe challenges to off-the-shelf techniques for system identification. This is particularly so when relatively small data sets, compared to the number of signal components, have to be used. It is often the case that each component of the measured signal can be described in terms of a few other measured variables, and these dependencies can be encoded graphically via so-called "Dynamic Bayesian Networks". The problem of finding the interconnection structure as well as estimating the dynamic models can be posed as a system identification problem involving variable selection. While this variable selection could be performed via standard selection techniques, their computational complexity, combinatorial in the number of inputs and outputs, may be a critical issue. In this paper we introduce two new nonparametric techniques which borrow ideas from a recently introduced kernel estimator called "stable spline" as well as from sparsity-inducing priors based on ℓ1-type penalties. Numerical experiments on the estimation of large-scale sparse (ARMAX) models show that these techniques provide a definite advantage over a group LAR algorithm and over state-of-the-art parametric identification techniques based on prediction error minimization.
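As a rough illustration of the kind of variable selection the abstract describes, the sketch below uses a group-ℓ1 (group lasso) penalty to pick out which input channels of a sparse ARX model actually drive the output, solved by proximal gradient descent. The data-generating system, penalty weight, and solver are hypothetical choices made for the example; this is not the paper's stable-spline estimator.

```python
import numpy as np

# Hypothetical example: group-l1 selection of active input channels
# in a sparse ARX model, via proximal gradient (ISTA) iterations.
rng = np.random.default_rng(0)
T, n_inputs, lag = 400, 6, 3

U = rng.standard_normal((T, n_inputs))
# Only channels 0 and 3 influence the output in this toy system.
y = np.zeros(T)
for t in range(lag, T):
    y[t] = 0.8 * U[t - 1, 0] - 0.5 * U[t - 2, 3] + 0.05 * rng.standard_normal()

# Regressor matrix: lagged inputs, one group of `lag` columns per channel.
rows = range(lag, T)
Phi = np.column_stack([U[[t - k for t in rows], j]
                       for j in range(n_inputs) for k in range(1, lag + 1)])
target = y[lag:]
groups = [np.arange(j * lag, (j + 1) * lag) for j in range(n_inputs)]

theta = np.zeros(Phi.shape[1])
L = np.linalg.norm(Phi, 2) ** 2   # Lipschitz constant of the LS gradient
lam = 20.0                        # group-l1 penalty weight (hand-tuned here)

for _ in range(500):
    grad = Phi.T @ (Phi @ theta - target)
    z = theta - grad / L
    # Group soft-thresholding: shrink each channel's block jointly,
    # zeroing whole groups and thereby deselecting entire channels.
    for g in groups:
        nrm = np.linalg.norm(z[g])
        if nrm > 0:
            z[g] *= max(0.0, 1.0 - lam / (L * nrm))
    theta = z

selected = {j for j, g in enumerate(groups) if np.linalg.norm(theta[g]) > 1e-6}
print(selected)
```

Because the penalty acts on whole coefficient groups rather than individual entries, an input channel is either retained with all its lags or discarded entirely, which is exactly the structured sparsity needed to recover the interconnection graph of a dynamic network.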