TY  - CONF
A1  - Yang, Mengyue
A1  - Liu, Furui
A1  - Chen, Zhitang
A1  - Shen, Xinwei
A1  - Hao, Jianye
A1  - Wang, Jun
UR  - https://doi.org/10.1109/CVPR46437.2021.00947
SN  - 1063-6919
N1  - This version is the author accepted manuscript. For information on re-use, please refer to the publisher's terms and conditions.
SP  - 9588
T3  - IEEE Conference on Computer Vision and Pattern Recognition
KW  - Science & Technology
KW  - Technology
KW  - Computer Science
KW  - Artificial Intelligence
KW  - Imaging Science & Photographic Technology
CY  - Nashville, TN, USA
PB  - IEEE
N2  - Learning disentanglement aims at finding a low-dimensional representation which consists of multiple explanatory and generative factors of the observational data. The framework of the variational autoencoder (VAE) is commonly used to disentangle independent factors from observations. However, in real scenarios, factors with semantics are not necessarily independent. Instead, there might be an underlying causal structure which renders these factors dependent. We thus propose a new VAE-based framework named CausalVAE, which includes a Causal Layer to transform independent exogenous factors into causal endogenous ones that correspond to causally related concepts in data. We further analyze the model identifiability, showing that the proposed model learned from observations recovers the true one up to a certain degree. Experiments are conducted on various datasets, including synthetic data and the real-world benchmark CelebA. Results show that the causal representations learned by CausalVAE are semantically interpretable, and their causal relationship as a Directed Acyclic Graph (DAG) is identified with good accuracy. Furthermore, we demonstrate that the proposed CausalVAE model is able to generate counterfactual data through "do-operations" on the causal factors.
ID  - discovery10142668
AV  - public
Y1  - 2021/11/13/
EP  - 9597
TI  - CausalVAE: Disentangled Representation Learning via Neural Structural Causal Models
ER  -