Most formulations of supervised learning assume that only the output data are uncertain. However, this assumption may be too strong for some learning tasks. This paper investigates the use of Gaussian processes to infer latent functions from a set of uncertain input-output examples. By assuming Gaussian distributions with known variances over the inputs and outputs, and by using the expectation of the covariance function, it is possible to analytically compute the expected covariance matrix of the data and thereby obtain a posterior distribution over functions. The method is evaluated on a synthetic problem and on a more realistic one, which consists of learning the dynamics of a cart-pole balancing task. The results indicate an improvement in both the mean squared error and the likelihood of the posterior Gaussian process when the data uncertainty is significant.
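As a sketch of the analytic step described above: for a squared-exponential covariance function and independent Gaussian input distributions, the expectation of the kernel over the input noise has a closed form. The function below is illustrative (its name, parameterisation, and the use of a diagonal lengthscale matrix are assumptions, not taken from the paper):

```python
import numpy as np

def expected_se_kernel(mu_i, mu_j, Si, Sj, sf2=1.0, lengthscales=None):
    """Expectation of the squared-exponential kernel under Gaussian inputs.

    Computes E[k(x_i, x_j)] for x_i ~ N(mu_i, Si) and x_j ~ N(mu_j, Sj),
    where k(a, b) = sf2 * exp(-0.5 * (a - b)^T Lambda^{-1} (a - b))
    with Lambda a diagonal matrix of squared lengthscales.
    """
    d = len(mu_i)
    if lengthscales is None:
        lengthscales = np.ones(d)
    Lam = np.diag(np.asarray(lengthscales, dtype=float) ** 2)
    S = Si + Sj                # combined input covariance
    A = Lam + S
    diff = np.asarray(mu_i) - np.asarray(mu_j)
    # Normalising factor |I + Lam^{-1} S|^{-1/2}, written via determinants
    det_term = np.sqrt(np.linalg.det(Lam) / np.linalg.det(A))
    # Quadratic form with the inflated covariance (Lam + Si + Sj)
    quad = diff @ np.linalg.solve(A, diff)
    return sf2 * det_term * np.exp(-0.5 * quad)
```

With zero input covariances this reduces to the ordinary squared-exponential kernel; as the input uncertainty grows, the determinant factor shrinks the expected covariance, which is what smooths the resulting expected covariance matrix of the data.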