We propose a method for learning causal relations from high-dimensional tensor data of the kind typically recorded in non-experimental databases. The method allows numerous dimensions of the data, such as samples, time points and domain variables, to be analysed simultaneously as a tensor. For such tensor data we combine non-Gaussian causal models with tensor-analytic algorithms in a novel way. We prove that simple causal relations can be identified regardless of how high-dimensional the data are. We rely on a statistical decomposition that flattens higher-order data tensors into matrices; this decomposition preserves the causal information and is therefore suitable for structure learning of causal graphical models, where a causal relation can be generalised across a dimension, for example over all time points. Related methods either focus on a set of samples for instantaneous effects or on a single sample for effects at certain time points. We evaluate the resulting algorithm and discuss its performance on both synthetic and real-world data.
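The flattening step mentioned above corresponds to the standard mode-n unfolding of a tensor, which rearranges a higher-order array into a matrix along one chosen dimension. The following is a minimal sketch of such an unfolding using NumPy; the axis labels (samples, time, variables) are illustrative assumptions, not part of the original method's specification.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: move axis `mode` to the front, then
    flatten all remaining axes into the columns of a matrix."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

# Illustrative 3-way tensor: (samples x time points x variables)
T = np.arange(24).reshape(2, 3, 4)

# Unfold along the variable mode: rows index variables,
# columns run over all (sample, time) combinations.
M = unfold(T, 2)
print(M.shape)  # (4, 6)
```

A matrix obtained this way can then be passed to a matrix-based causal discovery procedure, with each row generalising one variable's role across the remaining dimensions.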