We consider causally sufficient acyclic causal models in which the relationships among the variables are nonlinear while the disturbances have linear effects, and show that three principles are equivalent: the causal Markov condition (together with the independence between each disturbance and the corresponding parents), minimum disturbance entropy, and mutual independence of the disturbances. This equivalence motivates new, more efficient methods for certain causal discovery problems. In particular, we propose using multichannel blind deconvolution, an extension of independent component analysis, to perform Granger causality analysis with instantaneous effects. This approach yields more accurate parameter estimates and easily incorporates sparsity constraints. For nonlinear causal discovery with additive disturbances, we first exploit the conditional independence relationships to obtain the equivalence class; the remaining undetermined causal directions are then found by nonlinear regression and pairwise independence tests. This avoids brute-force search and greatly reduces the computational load.
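The final step above, orienting an undetermined edge between a pair of variables by nonlinear regression followed by a residual independence test, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a simple polynomial regressor in place of a general nonlinear one, a biased HSIC estimator with median-heuristic Gaussian kernels as the pairwise independence measure, and synthetic data with a known x → y additive-disturbance ground truth.

```python
import numpy as np

def rbf_gram(v, sigma):
    """Gaussian kernel Gram matrix for a 1-D sample."""
    d2 = (v[:, None] - v[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def median_bandwidth(v):
    """Median heuristic for the kernel width."""
    d = np.abs(v[:, None] - v[None, :])
    m = np.median(d[d > 0])
    return m if m > 0 else 1.0

def hsic(a, b):
    """Biased HSIC estimate; near zero iff a and b are (nearly) independent."""
    n = len(a)
    K = rbf_gram(a, median_bandwidth(a))
    L = rbf_gram(b, median_bandwidth(b))
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def anm_direction(x, y, deg=5):
    """Fit both candidate directions with a degree-`deg` polynomial
    (a stand-in for a general nonlinear regressor) and prefer the
    direction whose residual is more independent of the regressor."""
    resid_y = y - np.polyval(np.polyfit(x, y, deg), x)  # candidate x -> y
    resid_x = x - np.polyval(np.polyfit(y, x, deg), y)  # candidate y -> x
    h_xy = hsic(x, resid_y)
    h_yx = hsic(y, resid_x)
    return ("x->y" if h_xy < h_yx else "y->x"), h_xy, h_yx

# Synthetic additive-disturbance data with ground truth x -> y.
rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 2.0, 500)
y = x ** 3 + rng.normal(0.0, 1.0, 500)

direction, h_xy, h_yx = anm_direction(x, y)
print(direction)
```

In the correct direction the regression absorbs the functional dependence, so the residual is essentially the disturbance and nearly independent of the cause; in the reverse direction no additive-noise fit exists, the residual remains dependent on the regressor, and its HSIC score stays large, which is what the comparison exploits.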