Pombo, G; Gray, R; Cardoso, MJ; Ourselin, S; Rees, G; Ashburner, J; Nachev, P; (2023) Equitable modelling of brain imaging by counterfactual augmentation with morphologically constrained 3D deep generative models. Medical Image Analysis, 84, Article 102723. 10.1016/j.media.2022.102723.
Abstract
We describe CounterSynth, a conditional generative model of diffeomorphic deformations that induce label-driven, biologically plausible changes in volumetric brain images. The model is intended to synthesise counterfactual training data augmentations for downstream discriminative modelling tasks where fidelity is limited by data imbalance, distributional instability, confounding, or underspecification, and where performance is inequitable across distinct subpopulations. Focusing on demographic attributes, we evaluate the quality of synthesised counterfactuals with voxel-based morphometry, classification and regression of the conditioning attributes, and the Fréchet inception distance. Examining downstream discriminative performance in the context of engineered demographic imbalance and confounding, we use UK Biobank and OASIS magnetic resonance imaging data to benchmark CounterSynth augmentation against current solutions to these problems. We achieve state-of-the-art improvements, both in overall fidelity and equity. The source code for CounterSynth is available at https://github.com/guilherme-pombo/CounterSynth.
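As an illustrative aside, not drawn from the CounterSynth repository: diffeomorphic deformations of the kind described above are commonly obtained by integrating a stationary velocity field with scaling and squaring, then resampling the volume through the resulting displacement. The PyTorch sketch below shows that integration and warping step for a toy 3D volume; the function names (identity_grid, integrate_velocity, warp) and the random inputs are hypothetical placeholders, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def identity_grid(shape, device):
    # Normalised identity sampling grid in [-1, 1], shape (1, D, H, W, 3).
    d, h, w = shape
    zs = torch.linspace(-1, 1, d, device=device)
    ys = torch.linspace(-1, 1, h, device=device)
    xs = torch.linspace(-1, 1, w, device=device)
    z, y, x = torch.meshgrid(zs, ys, xs, indexing="ij")
    # grid_sample expects coordinates ordered (x, y, z)
    return torch.stack((x, y, z), dim=-1).unsqueeze(0)

def integrate_velocity(vel, grid, steps=7):
    # Scaling-and-squaring integration of a stationary velocity field
    # (in normalised grid units) into a diffeomorphic displacement field.
    disp = vel / (2 ** steps)
    for _ in range(steps):
        # Compose the flow with itself: u <- u + u(x + u)
        warped = F.grid_sample(
            disp.permute(0, 4, 1, 2, 3), grid + disp,
            mode="bilinear", padding_mode="border", align_corners=True
        ).permute(0, 2, 3, 4, 1)
        disp = disp + warped
    return disp

def warp(vol, disp, grid):
    # Resample a volume (B, C, D, H, W) at identity + displacement.
    return F.grid_sample(vol, grid + disp, mode="bilinear",
                         padding_mode="border", align_corners=True)

# Toy usage: deform a random 64^3 volume with a small random velocity field.
vol = torch.rand(1, 1, 64, 64, 64)
vel = 0.01 * torch.randn(1, 64, 64, 64, 3)
grid = identity_grid((64, 64, 64), vol.device)
warped_vol = warp(vol, integrate_velocity(vel, grid), grid)
```

Because the displacement is the exponential of a smooth velocity field, the warp is invertible and folding-free, which is what makes such deformations morphologically plausible augmentations for brain images.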