TY  - GEN
EP  - 25
SP  - 1
A1  - Shi, Y
A1  - Paige, B
A1  - Torr, PHS
A1  - Siddharth, N
AV  - public
ID  - discovery10171347
N1  - This version is the version of record. For information on re-use, please refer to the publisher's terms and conditions.
TI  - Relating by Contrasting: A Data-Efficient Framework for Multimodal DGMs
N2  - Multimodal learning for generative models often refers to the learning of abstract concepts from the commonality of information in multiple modalities, such as vision and language. While it has proven effective for learning generalisable representations, the training of such models often requires a large amount of "related" multimodal data that shares commonality, which can be expensive to come by. To mitigate this, we develop a novel contrastive framework for generative model learning, allowing us to train the model not just by the commonality between modalities, but by the distinction between "related" and "unrelated" multimodal data. We show in experiments that our method enables data-efficient multimodal learning on challenging datasets for various multimodal variational autoencoder (VAE) models. We also show that under our proposed framework, the generative model can accurately identify related samples from unrelated ones, making it possible to make use of the plentiful unlabeled, unpaired multimodal data.
PB  - ICLR
Y1  - 2021///
UR  - https://iclr.cc/Conferences/2021
ER  -