Concave Gaussian variational approximations for inference in large-scale Bayesian linear models.
In: Gordon, G. and Dunson, D. (eds.)
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics.
(pp. 199-207).
Journal of Machine Learning Research
Two popular approaches to forming bounds in approximate Bayesian inference are local variational methods and Kullback-Leibler (KL) divergence minimisation methods. For a large class of models we explicitly relate the two approaches, showing that the local variational method is equivalent to a weakened form of KL Gaussian approximation. This gives strong motivation to develop efficient methods for KL minimisation. An important and previously unproven property of the KL variational Gaussian bound is that it is concave in the parameters of the Gaussian whenever the sites are log-concave. This observation, together with compact concavity-preserving parametrisations of the covariance, enables us to develop fast, scalable optimisation procedures for obtaining lower bounds on the marginal likelihood in large-scale Bayesian linear models.
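The kind of bound described in the abstract can be sketched in a few lines. The following is an illustrative NumPy/SciPy sketch only, not the authors' code: it assumes a Bayesian linear model with logistic (sigmoid) sites and an N(0, I) prior (both choices are assumptions for illustration), takes q(w) = N(m, CC^T) with C lower triangular, and evaluates the KL variational Gaussian lower bound, where each log-concave site reduces to a one-dimensional Gaussian expectation computed by Gauss-Hermite quadrature.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical illustration (not the paper's implementation):
# KL variational Gaussian bound for a logistic-site Bayesian linear model,
#   B(m, C) = -KL(N(m, CC^T) || N(0, I)) + sum_n E_q[log sigmoid(y_n w^T x_n)].

rng = np.random.default_rng(0)
N, D = 40, 3
X = rng.normal(size=(N, D))
y = np.where(X @ rng.normal(size=D) > 0, 1.0, -1.0)  # synthetic labels

# Probabilists' Gauss-Hermite rule: sum_i w_i f(x_i) ~ int f(x) exp(-x^2/2) dx;
# rescale the weights so they give E[f(Z)] for Z ~ N(0, 1).
gh_x, gh_w = np.polynomial.hermite_e.hermegauss(20)
gh_w = gh_w / np.sqrt(2 * np.pi)

tril = np.tril_indices(D)

def neg_bound(params):
    m, c = params[:D], params[D:]
    C = np.zeros((D, D))
    C[tril] = c
    # KL(N(m, CC^T) || N(0, I)) in closed form; log det S = 2 sum log |C_ii|
    kl = 0.5 * (np.sum(C**2) + m @ m - D) \
        - np.sum(np.log(np.abs(np.diag(C)) + 1e-12))
    # Under q, each site's argument z_n = w^T x_n ~ N(mu_n, s_n^2):
    # the D-dimensional expectation collapses to a 1-D Gaussian integral.
    mu = X @ m
    s = np.sqrt(np.sum((X @ C)**2, axis=1))
    z = mu[:, None] + s[:, None] * gh_x[None, :]
    log_sites = -np.logaddexp(0.0, -y[:, None] * z)  # log sigmoid(y_n z)
    return kl - (log_sites @ gh_w).sum()  # negative of the lower bound

x0 = np.concatenate([np.zeros(D), np.eye(D)[tril]])  # m = 0, C = I
res = minimize(neg_bound, x0, method="L-BFGS-B")
print("initial bound:   ", -neg_bound(x0))
print("optimised bound: ", -res.fun)
```

Because the bound is concave in (m, C) for log-concave sites, the optimum found here is global rather than merely local, which is what licenses simple gradient-based optimisation at scale.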
|Title:||Concave Gaussian variational approximations for inference in large-scale Bayesian linear models|
|Event:||Fourteenth International Conference on Artificial Intelligence and Statistics|
|Open access status:||An open access version is available from UCL Discovery|
|Additional information:||Copyright 2011 by the authors.|
|UCL classification:||UCL > School of BEAMS > Faculty of Engineering Science
UCL > School of BEAMS > Faculty of Engineering Science > Computer Science|