The inverse Gaussian distribution: theory, methodology, and applications
Independent component analysis, a new concept?
Signal Processing - Special issue on higher order statistics
Causality: models, reasoning, and inference
A general class of multivariate skew-elliptical distributions
Journal of Multivariate Analysis
Statistics and Computing
UAI '00 Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence
Dependency networks for inference, collaborative filtering, and data visualization
The Journal of Machine Learning Research
A Linear Non-Gaussian Acyclic Model for Causal Discovery
The Journal of Machine Learning Research
Robust multi-task learning with t-processes
Proceedings of the 24th international conference on Machine learning
"Ideal Parent" Structure Learning for Continuous Variable Bayesian Networks
The Journal of Machine Learning Research
Estimation of causal effects using linear non-Gaussian causal models with hidden variables
International Journal of Approximate Reasoning
Learning graphical model structure using L1-regularization paths
AAAI'07 Proceedings of the 22nd national conference on Artificial intelligence - Volume 2
Infinite sparse factor analysis and infinite independent components analysis
ICA'07 Proceedings of the 7th international conference on Independent component analysis and signal separation
On the identifiability of the post-nonlinear causal model
UAI '09 Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence
Learning Bayesian network structure from massive datasets: the "Sparse Candidate" algorithm
UAI'99 Proceedings of the Fifteenth conference on Uncertainty in artificial intelligence
In this paper we consider sparse and identifiable linear latent variable (factor) models and linear Bayesian network models for parsimonious analysis of multivariate data. We propose a computationally efficient method for joint parameter and model inference and for model comparison. It consists of a fully Bayesian hierarchy for sparse models using slab and spike priors (two-component mixtures of a δ-function and a continuous distribution), non-Gaussian latent factors, and a stochastic search over the ordering of the variables. The framework, which we call SLIM (Sparse Linear Identifiable Multivariate modeling), is validated and benchmarked on artificial and real biological data sets. SLIM is closest in spirit to LiNGAM (Shimizu et al., 2006) but differs substantially in inference, Bayesian network structure learning, and model comparison. Experimentally, SLIM performs as well as or better than LiNGAM with comparable computational complexity. We attribute this mainly to the stochastic search strategy used, and to parsimony (sparsity and identifiability), which is an explicit part of the model. We propose two extensions to the basic i.i.d. linear framework: non-linear dependence on observed variables, called SNIM (Sparse Non-linear Identifiable Multivariate modeling), and correlations between latent variables, called CSLIM (Correlated SLIM), for temporal and/or spatial data. The source code and scripts are available from http://cogsys.imm.dtu.dk/slim/.
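To make the model class concrete, the following is a minimal sketch (not the authors' SLIM implementation) of the generative ingredients the abstract names: a spike-and-slab sparse weight matrix over a fixed variable ordering, and a linear acyclic model driven by non-Gaussian (here Laplace) noise, in the spirit of LiNGAM. All function names, the inclusion probability `pi`, and the choice of Laplace noise are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_spike_slab_weights(p, pi=0.3, slab_sd=1.0, rng=rng):
    """Draw a strictly lower-triangular weight matrix B with
    spike-and-slab sparsity: each entry is exactly zero (the
    delta 'spike') with probability 1 - pi, else Gaussian (the
    continuous 'slab')."""
    mask = rng.random((p, p)) < pi
    B = mask * rng.normal(0.0, slab_sd, size=(p, p))
    # Keeping only entries below the diagonal enforces acyclicity
    # under the chosen variable ordering.
    return np.tril(B, k=-1)

def sample_linear_nongaussian_data(B, n, rng=rng):
    """Generate n observations from x = B x + e, i.e.
    x = (I - B)^{-1} e, with non-Gaussian (Laplace) noise;
    non-Gaussianity is what makes the model identifiable."""
    p = B.shape[0]
    e = rng.laplace(0.0, 1.0, size=(n, p))
    A = np.linalg.inv(np.eye(p) - B)
    return e @ A.T

B = sample_spike_slab_weights(p=5)
X = sample_linear_nongaussian_data(B, n=1000)
```

A stochastic search over orderings, as used in SLIM, would wrap a sampler around the permutation that defines which triangular structure is admissible; the sketch above fixes that ordering for simplicity.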