TY  - JOUR
EP  - 47
IS  - 105
KW  - Gaussian processes
KW  - approximate posteriors
KW  - efficient sampling
A1  - Wilson, JT
A1  - Borovitskiy, V
A1  - Terenin, A
A1  - Mostowsky, P
A1  - Deisenroth, MP
SP  - 1
AV  - public
TI  - Pathwise Conditioning of Gaussian Processes
PB  - Journal of Machine Learning Research
UR  - https://jmlr.org/papers/v22/20-1260.html
ID  - discovery10117306
VL  - 22
JF  - Journal of Machine Learning Research
N1  - This is an Open Access article published under a Creative Commons Attribution 4.0 International (CC BY 4.0) Licence (https://creativecommons.org/licenses/by/4.0/).
N2  - As Gaussian processes are used to answer increasingly complex questions, analytic solutions become scarcer and scarcer. Monte Carlo methods act as a convenient bridge for connecting intractable mathematical expressions with actionable estimates via sampling. Conventional approaches for simulating Gaussian process posteriors view samples as draws from marginal distributions of process values at finite sets of input locations. This distribution-centric characterization leads to generative strategies that scale cubically in the size of the desired random vector. These methods are prohibitively expensive in cases where we would, ideally, like to draw high-dimensional vectors or even continuous sample paths. In this work, we investigate a different line of reasoning: rather than focusing on distributions, we articulate Gaussian conditionals at the level of random variables. We show how this pathwise interpretation of conditioning gives rise to a general family of approximations that lend themselves to efficiently sampling Gaussian process posteriors. Starting from first principles, we derive these methods and analyze the approximation errors they introduce. We then ground these results by exploring the practical implications of pathwise conditioning in various applied settings, such as global optimization and reinforcement learning.
Y1  - 2021/05//
ER  -