UCL Discovery

Generalization Gap in Amortized Inference

Zhang, M; Hayes, P; Barber, D; (2022) Generalization Gap in Amortized Inference. In: Advances in Neural Information Processing Systems (NeurIPS 2022).

PDF: 7245_generalization_gap_in_amortize.pdf - Published Version (638kB)
Abstract

The ability of likelihood-based probabilistic models to generalize to unseen data is central to many machine learning applications such as lossless compression. In this work, we study the generalization of a popular class of probabilistic models: the Variational Auto-Encoder (VAE). We discuss the two generalization gaps that affect VAEs and show that overfitting is usually dominated by amortized inference. Based on this observation, we propose a new training objective that improves the generalization of amortized inference. We demonstrate how our method can improve performance in the context of image modeling and lossless compression.
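A minimal sketch (not the authors' code) of the diagnostic the abstract alludes to: compare the amortized ELBO on training versus held-out data, then refine per-example variational parameters at test time; the portion of the train/test gap that closes under refinement is attributable to amortized inference rather than to the generative model. The architecture, data dimensions, and optimization settings below are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

D, H, Z = 784, 200, 20  # data, hidden, and latent dimensions (assumed)

class VAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(D, H), nn.ReLU(), nn.Linear(H, 2 * Z))
        self.dec = nn.Sequential(nn.Linear(Z, H), nn.ReLU(), nn.Linear(H, D))

    def elbo(self, x, mu, logvar):
        # Single-sample Monte Carlo ELBO: E_q[log p(x|z)] - KL(q(z|x) || N(0, I))
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)
        logits = self.dec(z)
        rec = -F.binary_cross_entropy_with_logits(logits, x, reduction="none").sum(-1)
        kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(-1)
        return rec - kl

    def amortized_elbo(self, x):
        # Variational parameters produced by the shared (amortized) encoder
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        return self.elbo(x, mu, logvar)

def refined_elbo(model, x, steps=100, lr=1e-2):
    # Start from the amortized posterior, then optimize per-example (mu, logvar)
    # directly; any remaining train/test gap reflects the generative model itself.
    with torch.no_grad():
        mu0, logvar0 = model.enc(x).chunk(2, dim=-1)
    mu = mu0.clone().requires_grad_(True)
    logvar = logvar0.clone().requires_grad_(True)
    opt = torch.optim.Adam([mu, logvar], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -model.elbo(x, mu, logvar).mean()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return model.elbo(x, mu, logvar)

# Usage with placeholder data (hypothetical):
# model = VAE()
# x_train, x_test = torch.rand(64, D), torch.rand(64, D)
# amortized_gap = model.amortized_elbo(x_train).mean() - model.amortized_elbo(x_test).mean()
# refined_gap = refined_elbo(model, x_train).mean() - refined_elbo(model, x_test).mean()
# A much larger amortized_gap than refined_gap would indicate that overfitting
# is dominated by amortized inference, as the abstract describes.
```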

Type: Proceedings paper
Title: Generalization Gap in Amortized Inference
Event: 36th Conference on Neural Information Processing Systems (NeurIPS 2022)
Open access status: An open access version is available from UCL Discovery
Publisher version: https://papers.nips.cc/
Language: English
Additional information: This version is the version of record. For information on re-use, please refer to the publisher’s terms and conditions.
UCL classification: UCL
UCL > Provost and Vice Provost Offices > UCL BEAMS
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science
URI: https://discovery.ucl.ac.uk/id/eprint/10173889
