Gaussian processes with nonstationary covariance functions are a powerful tool for Bayesian regression with input-dependent smoothness. A common approach models the local smoothness by a latent process that is integrated out using Markov chain Monte Carlo. In this paper, we demonstrate that an approximation using the estimated mean of the local smoothness yields good results and allows efficient gradient-based optimization for jointly learning the parameters of the latent and the observed processes. Extensive experiments on both synthetic and real-world data, including challenging problems in robotics, show the relevance and feasibility of our approach.
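The approach described above can be sketched in code. The following is a minimal, hypothetical 1-D illustration (not the authors' implementation): the local length-scale l(x) is taken as the mean prediction of a latent process over a few support points, plugged as a point estimate into a nonstationary (Gibbs-type) RBF kernel, and the latent log-length-scales are learned by optimizing the marginal likelihood. For brevity it uses `scipy.optimize.minimize` with finite-difference gradients rather than analytic ones; the support-point layout, latent kernel, and noise level are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def gibbs_kernel(x1, x2, l1, l2):
    """Nonstationary RBF (Gibbs) kernel in 1-D with input-dependent length-scales."""
    denom = l1[:, None] ** 2 + l2[None, :] ** 2
    pre = np.sqrt(2.0 * l1[:, None] * l2[None, :] / denom)
    return pre * np.exp(-((x1[:, None] - x2[None, :]) ** 2) / denom)

def latent_lengthscales(x, xs, log_ls, latent_scale=1.0):
    """Point estimate of l(x): mean of a stationary-GP fit to the latent
    log-length-scales at the support points xs (assumed latent kernel)."""
    def rbf(a, b):
        return np.exp(-0.5 * ((a[:, None] - b[None, :]) / latent_scale) ** 2)
    K = rbf(xs, xs) + 1e-6 * np.eye(len(xs))
    return np.exp(rbf(x, xs) @ np.linalg.solve(K, log_ls))

def nlml(log_ls, x, y, xs, noise=0.05):
    """Negative log marginal likelihood of the observed process, with the
    latent length-scales plugged in as point estimates."""
    l = latent_lengthscales(x, xs, log_ls)
    K = gibbs_kernel(x, x, l, l) + noise ** 2 * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.sum(np.log(np.diag(L)))

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 60)
# Synthetic target: smooth on the left, rapidly varying on the right.
y = np.where(x < 0.5, np.sin(2 * np.pi * x), np.sin(12 * np.pi * x))
y += 0.05 * rng.standard_normal(x.shape)

xs = np.linspace(0.0, 1.0, 5)        # latent support points (assumption)
p0 = np.full(len(xs), np.log(0.2))   # initial log length-scales
res = minimize(nlml, p0, args=(x, y, xs), method="L-BFGS-B",
               bounds=[(-4.0, 2.0)] * len(xs))
print(nlml(p0, x, y, xs), "->", res.fun)  # marginal likelihood improves
```

Because the latent process enters only through its mean, the whole objective is a smooth function of the latent parameters, which is what makes joint gradient-based learning possible in place of MCMC integration.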