TY  - CPAPER
N1  - This version is the author accepted manuscript. For information on re-use, please refer to the publisher's terms and conditions.
PB  - IEEE
SP  - 5737
TI  - Interpretable Transformations with Encoder-Decoder Networks
Y1  - 2017/12/25/
T3  - IEEE International Conference on Computer Vision (ICCV)
UR  - https://doi.org/10.1109/ICCV.2017.611
A1  - Worrall, DE
A1  - Garbin, SJ
A1  - Turmukhambetov, D
A1  - Brostow, GJ
EP  - 5746
AV  - public
N2  - Deep feature spaces have the capacity to encode complex transformations of their input data. However, understanding the relative feature-space relationship between two transformed encoded images is difficult. For instance, what is the relative feature-space relationship between two rotated images? What is decoded when we interpolate in feature space? Ideally, we want to disentangle confounding factors, such as pose, appearance, and illumination, from object identity. Disentangling these is difficult because they interact in very nonlinear ways. We propose a simple method to construct a deep feature space, with explicitly disentangled representations of several known transformations. A person or algorithm can then manipulate the disentangled representation, for example, to re-render an image with explicit control over parameterized degrees of freedom. The feature space is constructed using a transforming encoder-decoder network with a custom feature transform layer acting on the hidden representations. We demonstrate the advantages of explicit disentangling on a variety of datasets and transformations, and as an aid for traditional tasks, such as classification.
KW  - Three-dimensional displays
KW  - Two dimensional displays
KW  - Aerospace electronics
KW  - Feature extraction
KW  - Transforms
KW  - Training
ID  - discovery10039218
CY  - Venice, Italy
ER  -