eprintid: 10071737
rev_number: 28
eprint_status: archive
userid: 608
dir: disk0/10/07/17/37
datestamp: 2019-04-05 15:54:07
lastmod: 2021-10-13 23:30:43
status_changed: 2019-10-22 15:47:29
type: article
metadata_visibility: show
creators_name: Rainer, G
creators_name: Jakob, W
creators_name: Ghosh, A
creators_name: Weyrich, T
title: Neural BTF Compression and Interpolation
ispublished: pub
divisions: UCL
divisions: B04
divisions: C05
divisions: F48
note: This version is the author accepted manuscript. For information on re-use, please refer to the publisher’s terms and conditions.
abstract: The Bidirectional Texture Function (BTF) is a data-driven solution to render materials with complex appearance. A typical capture contains tens of thousands of images of a material sample under varying viewing and lighting conditions. While a BTF can faithfully record complex light interactions in the material, its main drawback is the massive memory requirement, both for storing and rendering, making effective compression of BTF data a critical component in practical applications. Common compression schemes used in practice are based on matrix factorization techniques, which preserve the discrete format of the original dataset. While this approach generalizes well to different materials, rendering with the compressed dataset still relies on interpolating between the closest samples. Depending on the material and the angular resolution of the BTF, this can lead to blurring and ghosting artifacts. An alternative approach uses analytic model fitting to approximate the BTF data, using continuous functions that naturally interpolate well, but whose expressive range is often not wide enough to faithfully recreate materials with complex non-local lighting effects (subsurface scattering, inter-reflections, shadowing and masking…). In light of these observations, we propose a neural network-based BTF representation inspired by autoencoders: our encoder compresses each texel to a small set of latent coefficients, while our decoder additionally takes in a light and view direction and outputs a single RGB vector at a time. This allows us to continuously query reflectance values in the light and view hemispheres, eliminating the need for linear interpolation between discrete samples. We train our architecture on fabric BTFs with a challenging appearance and compare to standard PCA as a baseline. We achieve competitive compression ratios and high-quality interpolation/extrapolation without blurring or ghosting artifacts.
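note_editor: To illustrate the decoder idea the abstract describes, the following is a minimal sketch assuming PyTorch. The class name BTFDecoder, the 8-coefficient latent code, and the layer widths are illustrative assumptions, not the authors' published architecture; only the interface (per-texel latent code plus light and view directions in, one RGB value out) follows the abstract.

  # Minimal sketch (not the authors' code): a decoder in the style the
  # abstract describes, assuming PyTorch; all sizes are hypothetical.
  import torch
  import torch.nn as nn

  class BTFDecoder(nn.Module):
      """Maps per-texel latent coefficients plus a light and a view
      direction to a single RGB reflectance value."""
      def __init__(self, latent_dim=8, hidden=64):
          super().__init__()
          # Input: latent code + 3D light direction + 3D view direction.
          self.net = nn.Sequential(
              nn.Linear(latent_dim + 6, hidden), nn.ReLU(),
              nn.Linear(hidden, hidden), nn.ReLU(),
              nn.Linear(hidden, 3),  # one RGB vector per query
          )

      def forward(self, latent, wi, wo):
          # Continuous query: any (light, view) pair in the hemispheres,
          # with no interpolation between discrete BTF samples.
          return self.net(torch.cat([latent, wi, wo], dim=-1))

  # Example query for one texel:
  decoder = BTFDecoder()
  latent = torch.randn(1, 8)            # latent code from the encoder
  wi = torch.tensor([[0.0, 0.0, 1.0]])  # light direction
  wo = torch.tensor([[0.0, 0.0, 1.0]])  # view direction
  rgb = decoder(latent, wi, wo)         # shape (1, 3)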
date: 2019-05
date_type: published
publisher: Eurographics Association
official_url: https://doi.org/10.1111/cgf.13633
oa_status: green
full_text_type: other
language: eng
primo: open
primo_central: open_green
verified: verified_manual
elements_id: 1644793
doi: 10.1111/cgf.13633
lyricists_name: Rainer, Gilles
lyricists_name: Weyrich, Tim
lyricists_id: GCRAI40
lyricists_id: TAWEY36
actors_name: Weyrich, Tim
actors_id: TAWEY36
actors_role: owner
full_text_status: public
publication: Computer Graphics Forum (Proc. Eurographics)
volume: 38
number: 2
pagerange: 235-244
citation: Rainer, G; Jakob, W; Ghosh, A; Weyrich, T; (2019) Neural BTF Compression and Interpolation. Computer Graphics Forum (Proc. Eurographics), 38 (2) pp. 235-244. 10.1111/cgf.13633 <https://doi.org/10.1111/cgf.13633>. Green open access
document_url: https://discovery.ucl.ac.uk/id/eprint/10071737/1/rainer19neural.pdf