Sztrajman, Alejandro; (2022) Machine Learning Applications in Appearance Modelling. Doctoral thesis (Ph.D), UCL (University College London).
Text: asztrajman_PhD_thesis.pdf - Accepted Version (177MB)
Abstract
In this thesis, we address multiple applications of machine learning in appearance modelling. We do so by leveraging data-driven approaches, guided by image-based error metrics, to generate new representations of material appearance and scene illumination. We first address the interchange of material appearance between different analytic representations, through an image-based optimisation of BRDF model parameters. We analyse the method in terms of stability with respect to variations of the BRDF parameters, and show that it can be used for material interchange between different renderers and workflows, without the need to access shader implementations. We extend our method to enable the remapping of spatially-varying materials, by presenting two regression schemes that allow us to learn the transformation of parameters between models and apply it to texture maps at fast rates. Next, we centre on the efficient representation and rendering of measured material appearance. We develop a neural BRDF representation that provides high-quality reconstruction with low storage and evaluation times competitive with analytic models. Our method compares favourably against other representations in terms of reconstruction accuracy, and we show that it can also be used to encode anisotropic materials. In addition, we generate a unified encoding of real-world materials via a meta-learning autoencoder architecture guided by a differentiable rendering loss. This enables the generation of new realistic materials by interpolation of embeddings, and the fast estimation of material properties. We show that this can be leveraged for efficient rendering through importance sampling, by predicting the parameters of an invertible analytic BRDF model. Finally, we design a hybrid representation for high-dynamic-range illumination that combines a convolutional autoencoder-based encoding for low-intensity light with a parametric model for high-intensity light. Our model provides a flexible, compact encoding for environment maps, while also preserving an accurate reconstruction of the high-intensity component, appropriate for rendering purposes. We utilise our light encodings in a second convolutional neural network trained to predict illumination from a single outdoor face portrait at interactive rates, with potential applications in real-time light estimation and 3D object insertion.
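To make the first contribution more concrete, the sketch below illustrates the general idea of image-based remapping between analytic BRDF models. It is not the thesis pipeline: a toy Phong-style lobe stands in for the source material, a GGX lobe for the target model, fixed half-angle samples stand in for rendered pixels, and the loss and optimiser are illustrative assumptions.

```python
"""Minimal sketch of image-based BRDF parameter remapping (illustrative only)."""
import numpy as np
from scipy.optimize import minimize

# Half-angle samples (cosine between normal and half vector), standing in
# for the pixels of a rendered sphere image.
cos_h = np.linspace(0.01, 1.0, 256)

def phong_lobe(shininess):
    # Normalised Phong-style specular lobe.
    return (shininess + 2.0) / (2.0 * np.pi) * cos_h ** shininess

def ggx_lobe(alpha):
    # GGX (Trowbridge-Reitz) normal distribution function.
    a2 = alpha * alpha
    denom = cos_h ** 2 * (a2 - 1.0) + 1.0
    return a2 / (np.pi * denom ** 2)

# "Source material": a Phong exponent whose appearance we want to reproduce
# with the target GGX model.
target = phong_lobe(shininess=120.0)

def image_loss(params):
    # L2 error between the two lobes over the sample "image".
    return np.mean((ggx_lobe(params[0]) - target) ** 2)

result = minimize(image_loss, x0=[0.5], bounds=[(1e-3, 1.0)], method="L-BFGS-B")
print("fitted GGX roughness alpha:", result.x[0])
```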
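Similarly, the neural BRDF representation described above can be pictured as a compact MLP decoder fitted to reflectance samples. The following is a hedged illustration under assumed choices: the two-cosine input parametrisation, network size, synthetic training data, and log-space loss are not the architecture used in the thesis.

```python
"""Minimal sketch of a neural BRDF: a small MLP fitted to reflectance samples."""
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic "measured" data: reflectance as a function of two angle cosines
# (a toy GGX lobe scaled by a cosine stands in for real measurements).
cos_h = torch.rand(4096, 1)
cos_d = torch.rand(4096, 1)
alpha = 0.2
a2 = alpha * alpha
ndf = a2 / (math.pi * (cos_h ** 2 * (a2 - 1.0) + 1.0) ** 2)
reflectance = ndf * cos_d
inputs = torch.cat([cos_h, cos_d], dim=1)

# Compact MLP decoder; a real implementation would output per-channel colour.
model = nn.Sequential(
    nn.Linear(2, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1),
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    optimiser.zero_grad()
    pred = model(inputs)
    # Log-space loss compresses the high dynamic range of specular peaks.
    loss = torch.mean((torch.log1p(pred.clamp(min=0)) - torch.log1p(reflectance)) ** 2)
    loss.backward()
    optimiser.step()

print("final training loss:", loss.item())
```

The log-space loss in the sketch reflects a common choice when fitting high-dynamic-range reflectance data, where specular peaks would otherwise dominate the error.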
| Type | Thesis (Doctoral) |
| --- | --- |
| Qualification | Ph.D |
| Title | Machine Learning Applications in Appearance Modelling |
| Open access status | An open access version is available from UCL Discovery |
| Language | English |
| Additional information | Copyright © The Author 2022. Original content in this thesis is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) Licence (https://creativecommons.org/licenses/by-nc/4.0/). Any third-party copyright material present remains the property of its respective owner(s) and is licensed under its existing terms. Access may initially be restricted at the author’s request. |
| UCL classification | UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science |
| URI | https://discovery.ucl.ac.uk/id/eprint/10152596 |