eprintid: 10198875
rev_number: 7
eprint_status: archive
userid: 699
dir: disk0/10/19/88/75
datestamp: 2024-10-24 15:15:00
lastmod: 2024-10-24 15:15:00
status_changed: 2024-10-24 15:15:00
type: proceedings_section
metadata_visibility: show
sword_depositor: 699
creators_name: della Maggiora, G
creators_name: Croquevielle, LA
creators_name: Deshpande, N
creators_name: Horsley, H
creators_name: Heinis, T
creators_name: Yakimovich, A
title: Conditional Variational Diffusion Models
ispublished: pub
divisions: UCL
divisions: B02
divisions: C10
divisions: D17
divisions: G93
keywords: Denoising Diffusion Probabilistic Models, Inverse Problems, Generative Models, Super Resolution, Phase Quantification, Variational Methods
note: This version is the version of record. For information on re-use, please refer to the publisher’s terms and conditions.
abstract: Inverse problems aim to determine parameters from observations, a crucial task in engineering and science. Lately, generative models, especially diffusion models, have gained popularity in this area for their ability to produce realistic solutions and their good mathematical properties. Despite their success, an important drawback of diffusion models is their sensitivity to the choice of variance schedule, which controls the dynamics of the diffusion process. Fine-tuning this schedule for specific applications is crucial but time-consuming, and does not guarantee an optimal result. We propose a novel approach for learning the schedule as part of the training process. Our method supports probabilistic conditioning on data, provides high-quality solutions, and is flexible, adapting to different applications with minimal overhead. We test this approach on two unrelated inverse problems, super-resolution microscopy and quantitative phase imaging, yielding results comparable or superior to previous methods and fine-tuned diffusion models. We conclude that fine-tuning the schedule by experimentation should be avoided, because the schedule can be learned during training in a stable way that yields better results.
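The abstract's central claim is that the variance schedule need not be hand-tuned but can be learned jointly with the denoiser. As a rough illustration of that idea only (not the authors' implementation; the schedule network, denoiser, shapes, and hyperparameters below are assumptions), here is a minimal PyTorch sketch in the log-SNR parameterization common to variational diffusion models, where a monotone network gamma(t) is trained together with a conditional noise predictor:

```python
# Illustrative sketch, not the paper's code: a learnable variance schedule
# in the log-SNR parameterization, with alpha_t^2 = sigmoid(-gamma(t)) and
# sigma_t^2 = sigmoid(gamma(t)). All names and sizes here are assumptions.
import torch
import torch.nn as nn

class MonotoneGamma(nn.Module):
    """Learnable, monotonically increasing gamma(t) for t in [0, 1].

    Non-negative weights make t -> gamma(t) monotone, so the implied
    signal-to-noise ratio exp(-gamma(t)) decreases along the diffusion.
    """
    def __init__(self, hidden=64):
        super().__init__()
        self.w1 = nn.Parameter(0.1 * torch.randn(hidden, 1))
        self.b1 = nn.Parameter(torch.zeros(hidden))
        self.w2 = nn.Parameter(0.1 * torch.randn(1, hidden))
        self.g0 = nn.Parameter(torch.tensor(-8.0))   # gamma at t = 0
        self.rng = nn.Parameter(torch.tensor(16.0))  # total gamma range

    def forward(self, t):
        h = torch.sigmoid(t[:, None] @ self.w1.abs().T + self.b1)
        return self.g0 + self.rng.abs() * (h @ self.w2.abs().T)[:, 0]

# Hypothetical conditional denoiser: predicts the noise from the noisy
# sample x_t, the conditioning observation y, and gamma(t).
denoiser = nn.Sequential(nn.Linear(2 * 32 + 1, 128), nn.SiLU(),
                         nn.Linear(128, 32))
schedule = MonotoneGamma()
opt = torch.optim.Adam(list(denoiser.parameters()) +
                       list(schedule.parameters()), lr=1e-4)

def training_step(x0, y):
    """One step of the simplified continuous-time variational loss,
    0.5 * E_t[gamma'(t) * ||eps - eps_hat||^2], which holds for any
    schedule and so lets gamma be learned by gradient descent."""
    t = torch.rand(x0.shape[0], requires_grad=True)
    gamma = schedule(t)
    # gamma'(t) via autograd; each gamma_i depends only on t_i.
    dgamma_dt, = torch.autograd.grad(gamma.sum(), t, create_graph=True)
    alpha = torch.sigmoid(-gamma).sqrt()[:, None]
    sigma = torch.sigmoid(gamma).sqrt()[:, None]
    eps = torch.randn_like(x0)
    x_t = alpha * x0 + sigma * eps
    eps_hat = denoiser(torch.cat([x_t, y, gamma[:, None]], dim=-1))
    loss = 0.5 * (dgamma_dt * ((eps - eps_hat) ** 2).mean(-1)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Because the weighting gamma'(t) keeps the objective valid for any monotone schedule, the optimizer is free to reshape gamma during training, which is the stability-without-hand-tuning property the abstract describes.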
date: 2024-01-16
date_type: published
publisher: International Conference on Learning Representations (ICLR)
official_url: https://openreview.net/forum?id=YOKnEkIuoi
oa_status: green
full_text_type: pub
language: eng
primo: open
primo_central: open_green
verified: verified_manual
elements_id: 2329524
lyricists_name: Horsley, Harry
lyricists_id: HHORS59
actors_name: Flynn, Bernadette
actors_id: BFFLY94
actors_role: owner
full_text_status: public
pres_type: paper
series: ICLR
publication: 12th International Conference on Learning Representations, ICLR 2024
volume: 2024
event_title: 12th International Conference on Learning Representations, ICLR 2024
book_title: 12th International Conference on Learning Representations, ICLR 2024
citation: della Maggiora, G; Croquevielle, LA; Deshpande, N; Horsley, H; Heinis, T; Yakimovich, A; (2024) Conditional Variational Diffusion Models. In: 12th International Conference on Learning Representations, ICLR 2024. International Conference on Learning Representations (ICLR). Green open access.
document_url: https://discovery.ucl.ac.uk/id/eprint/10198875/1/5967_Conditional_Variational_D.pdf