Arbel, M; Korba, A; Salim, A; Gretton, A; (2019) Maximum Mean Discrepancy Gradient Flow. In: Wallach, H and Larochelle, H and Beygelzimer, A and d'Alché-Buc, F and Fox, E and Garnett, R, (eds.) Advances in Neural Information Processing Systems 32 (NeurIPS 2019). NeurIPS Proceedings: Vancouver, Canada.
Abstract
We construct a Wasserstein gradient flow of the maximum mean discrepancy (MMD) and study its convergence properties. The MMD is an integral probability metric defined for a reproducing kernel Hilbert space (RKHS), and serves as a metric on probability measures for a sufficiently rich RKHS. We obtain conditions for convergence of the gradient flow towards a global optimum, which can be related to particle transport when optimizing neural networks. We also propose a way to regularize this MMD flow by injecting noise into the gradient, an algorithmic fix supported by theoretical and empirical evidence. The practical implementation of the flow is straightforward, since both the MMD and its gradient have simple closed-form expressions that can be easily estimated with samples.
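As a concrete illustration of that last point, here is a minimal sketch of the sample-based MMD estimate and one Euler step of a noise-injected particle flow, assuming a Gaussian kernel. The function names, step size, noise level, and kernel bandwidth are illustrative choices, not taken from the paper or its released code.

```python
import numpy as np

def mmd_squared(X, Y, sigma=1.0):
    """Biased sample estimate of MMD^2 between X ~ mu and Y ~ nu under the
    Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    def gram(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return gram(X, X).mean() - 2 * gram(X, Y).mean() + gram(Y, Y).mean()

def witness_grad(Z, X, Y, sigma=1.0):
    """Gradient, at each row of Z, of the empirical witness function
    f(z) = mean_i k(z, x_i) - mean_j k(z, y_j); -grad f drives the flow."""
    def term(Z, A):
        diff = Z[:, None, :] - A[None, :, :]                     # (n, m, d)
        k = np.exp(-(diff ** 2).sum(-1) / (2 * sigma ** 2))      # (n, m)
        return -(k[..., None] * diff).mean(axis=1) / sigma ** 2  # (n, d)
    return term(Z, X) - term(Z, Y)

def mmd_flow_step(X, Y, step=0.5, noise=0.1, sigma=1.0, rng=None):
    """One Euler step of the noise-injected MMD flow: the gradient is evaluated
    at a Gaussian-perturbed copy of each particle (noise enters the gradient's
    argument, not the update itself)."""
    rng = np.random.default_rng() if rng is None else rng
    U = rng.standard_normal(X.shape)
    return X - step * witness_grad(X + noise * U, X, Y, sigma)

# Illustrative run: particles initialised away from the target drift toward it.
rng = np.random.default_rng(0)
Y = rng.standard_normal((200, 2))          # target samples, nu = N(0, I)
X = rng.standard_normal((200, 2)) + 2.0    # particles, mu_0 = N(2, I)
X0 = X.copy()
for _ in range(500):
    X = mmd_flow_step(X, Y, rng=rng)
print(mmd_squared(X0, Y), "->", mmd_squared(X, Y))  # MMD^2 should shrink
```

Note where the noise enters: on one reading of the abstract's "injection of noise in the gradient", it perturbs the point at which the gradient is evaluated rather than being added to the update itself, which distinguishes this regularization from a Langevin-style diffusion term.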