eprintid: 10142668
rev_number: 6
eprint_status: archive
userid: 699
dir: disk0/10/14/26/68
datestamp: 2022-02-01 14:11:22
lastmod: 2022-02-01 14:11:22
status_changed: 2022-02-01 14:11:22
type: proceedings_section
metadata_visibility: show
sword_depositor: 699
creators_name: Yang, Mengyue
creators_name: Liu, Furui
creators_name: Chen, Zhitang
creators_name: Shen, Xinwei
creators_name: Hao, Jianye
creators_name: Wang, Jun
title: CausalVAE: Disentangled Representation Learning via Neural Structural Causal Models
ispublished: pub
divisions: C05
divisions: F48
divisions: B04
divisions: UCL
keywords: Science & Technology, Technology, Computer Science, Artificial Intelligence, Imaging Science & Photographic Technology
note: This version is the author accepted manuscript. For information on re-use, please refer to the publisher’s terms and conditions.
abstract: Learning disentanglement aims at finding a low-dimensional representation which consists of multiple explanatory and generative factors of the observational data. The framework of variational autoencoder (VAE) is commonly used to disentangle independent factors from observations. However, in real scenarios, factors with semantics are not necessarily independent. Instead, there might be an underlying causal structure which renders these factors dependent. We thus propose a new VAE-based framework named CausalVAE, which includes a Causal Layer to transform independent exogenous factors into causal endogenous ones that correspond to causally related concepts in data. We further analyze the model identifiability, showing that the proposed model learned from observations recovers the true one up to a certain degree. Experiments are conducted on various datasets, including synthetic data and the real-world benchmark CelebA. Results show that the causal representations learned by CausalVAE are semantically interpretable, and their causal relationship as a Directed Acyclic Graph (DAG) is identified with good accuracy. Furthermore, we demonstrate that the proposed CausalVAE model is able to generate counterfactual data by applying the “do-operation” to the causal factors.
date: 2021-11-13
date_type: published
publisher: IEEE
official_url: https://doi.org/10.1109/CVPR46437.2021.00947
oa_status: green
full_text_type: other
language: eng
primo: open
primo_central: open_green
verified: verified_manual
elements_id: 1936690
doi: 10.1109/CVPR46437.2021.00947
isbn_13: 9781665445092
lyricists_name: Wang, Jun
lyricists_name: Yang, Mengyue
lyricists_id: JWANG00
lyricists_id: MYANB08
actors_name: Flynn, Bernadette
actors_id: BFFLY94
actors_role: owner
full_text_status: public
pres_type: paper
series: IEEE Conference on Computer Vision and Pattern Recognition
publication: 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021
place_of_pub: Nashville, TN, USA
pagerange: 9588-9597
pages: 10
event_title: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
event_location: ELECTR NETWORK
event_dates: 19 Jun 2021 - 25 Jun 2021
issn: 1063-6919
book_title: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
citation: Yang, Mengyue; Liu, Furui; Chen, Zhitang; Shen, Xinwei; Hao, Jianye; Wang, Jun; (2021) CausalVAE: Disentangled Representation Learning via Neural Structural Causal Models. In: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). (pp. 9588-9597). IEEE: Nashville, TN, USA. Green open access
document_url: https://discovery.ucl.ac.uk/id/eprint/10142668/1/2004.08697.pdf