In many applications of regression, the conditional average of the target variable is not sufficient for prediction. The dependencies between the explanatory variables and the target variable can be complex, calling for modelling of the full conditional probability density. The ubiquitous problem with such methods is overfitting: because of the flexibility of the model, the likelihood of any data point can be made arbitrarily large. In this paper, a method for predicting uncertainty by modelling the conditional density is presented, based on conditioning the scale parameter of the noise process on the explanatory variables. The model is constructed in such a manner that the unpredictability of the scale of the target distribution translates into a more robust predictive distribution. The overfitting problems are avoided by learning the model using variational EM. The method is demonstrated experimentally on synthetic data as well as on real-world environmental data. The viability of the approach was put to the test in the 'Predictive uncertainty in environmental modelling' competition held at WCCI'06, which the proposed method won.
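The core idea of conditioning the noise scale on the explanatory variables can be illustrated with a minimal sketch. The code below is not the paper's variational-EM method; it is a hypothetical simplification that fits a heteroscedastic Gaussian model y ~ N(a*x + b, exp(c*x + d)^2) by plain gradient descent on the negative log-likelihood, so both the mean and the log of the noise scale depend on the input:

```python
import numpy as np

# Minimal sketch (NOT the paper's variational-EM method): heteroscedastic
# regression where the mean and the log noise scale are both linear in x.
# Model: y ~ N(a*x + b, exp(c*x + d)^2), fitted by maximum likelihood.

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 400)
y = 2.0 * x + rng.normal(scale=np.exp(0.8 * x - 1.0))  # input-dependent noise

a = b = c = d = 0.0
lr = 0.05
for _ in range(3000):
    mu = a * x + b                      # conditional mean
    log_s = c * x + d                   # conditional log noise scale
    inv_var = np.exp(-2.0 * log_s)
    r = y - mu
    # Gradients of the average negative log-likelihood
    # NLL per point = log_s + r^2 / (2 * exp(2 * log_s)) + const
    g_mu = -r * inv_var                 # d NLL / d mu
    g_log_s = 1.0 - r**2 * inv_var      # d NLL / d log_s
    a -= lr * np.mean(g_mu * x)
    b -= lr * np.mean(g_mu)
    c -= lr * np.mean(g_log_s * x)
    d -= lr * np.mean(g_log_s)

# The fitted parameters roughly recover the generating values
# (slope ~2 for the mean, ~0.8*x - 1 for the log scale).
print(a, b, c, d)
```

Pure maximum likelihood like this is exactly where the overfitting danger described above arises with flexible models (the scale at a training point can shrink toward zero, making the likelihood blow up); the paper's contribution is to control this with variational EM rather than point estimates.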