Computational Statistics & Data Analysis
Gaussian processes retain the linear model either as a special case or in the limit. We show how this relationship can be exploited when the data are at least partially linear. However, from the perspective of the Bayesian posterior, the Gaussian processes that encode the linear model either have probability of nearly zero or are otherwise unattainable without the explicit construction of a prior with the limiting linear model in mind. We develop such a prior, and show that its practical benefits extend well beyond the computational and conceptual simplicity of the linear model. For example, linearity can be extracted on a per-dimension basis, or can be combined with treed partition models to yield a highly efficient nonstationary model. Our approach is demonstrated on synthetic and real datasets of varying linearity and dimensionality.
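The special-case relationship mentioned above can be illustrated numerically: a Gaussian process with a dot-product (linear) covariance and unit prior variance on the weights yields exactly the posterior mean of Bayesian linear regression. The sketch below is not the authors' code; the dimensions, noise level, and standard-normal prior are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data from a noisy linear function.
n, d = 30, 2
X = rng.normal(size=(n, d))
w_true = np.array([1.5, -0.7])
sigma2 = 0.1  # noise variance
y = X @ w_true + np.sqrt(sigma2) * rng.normal(size=n)

Xs = rng.normal(size=(5, d))  # test inputs

# GP regression with the linear kernel k(x, x') = x . x'.
K = X @ X.T
Ks = Xs @ X.T
gp_mean = Ks @ np.linalg.solve(K + sigma2 * np.eye(n), y)

# Bayesian linear regression with prior w ~ N(0, I) and the same noise.
A = X.T @ X + sigma2 * np.eye(d)
blr_mean = Xs @ np.linalg.solve(A, X.T @ y)

# The two posterior means coincide (a standard matrix identity).
print(np.allclose(gp_mean, blr_mean))
```

The agreement follows from the identity X^T (X X^T + s I)^{-1} = (X^T X + s I)^{-1} X^T; the "limiting" route in the abstract instead sends a stationary kernel's lengthscale to infinity, which is attainable only with a prior built for that limit.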