A widely acknowledged drawback of many statistical modelling techniques commonly used in machine learning is that the resulting models are extremely difficult to interpret. Researchers have introduced a number of new concepts and algorithms to address this problem, focusing primarily on determining which inputs are relevant to predicting the output. This work describes a transparent, advanced non-linear modelling approach that allows the constructed predictive models to be visualised, supporting model validation and assisting interpretation. The technique combines the representational advantage of a sparse ANOVA decomposition with the good generalisation ability of a kernel machine. It achieves this by employing two forms of regularisation: a 1-norm based structural regulariser to enforce transparency, and a 2-norm based regulariser to control smoothness. The resulting model structure can be visualised, showing the overall effects of the different inputs, their interactions, and the strength of those interactions. The robustness of the technique is illustrated on a range of both artificial and real-world datasets. Its performance is compared with that of other modelling techniques, and it is shown to exhibit competitive generalisation performance together with improved interpretability.