UCL Discovery

MNSS: Neural Supersampling Framework for Real-Time Rendering on Mobile Devices

Yang, Sipeng; Zhao, Yunlu; Luo, Yuzhe; Wang, He; Sun, Hongyu; Li, Chen; Cai, Binghuang (2024) MNSS: Neural Supersampling Framework for Real-Time Rendering on Mobile Devices. IEEE Transactions on Visualization and Computer Graphics, 30 (7), 4271-4284. 10.1109/tvcg.2023.3259141. Green open access

Text: 2409.18401v1.pdf - Accepted Version (32MB)

Abstract

Although neural supersampling has achieved great success in improving image quality across various applications, its high computational demand still makes it difficult to apply to a wide range of real-time rendering applications. Most existing methods are computationally expensive and require high-performance hardware, which rules out platforms with limited hardware such as smartphones. To this end, we propose a new supersampling framework for real-time rendering that reconstructs a high-quality image from a low-resolution one and is lightweight enough to run on smartphones within a real-time budget. Our model takes the renderer-generated low-resolution content as input and produces high-resolution, anti-aliased results. To maximize sampling efficiency, we propose alternating the sub-pixel sample pattern during rasterization. This allows us to use a relatively small reconstruction model while maintaining high image quality. By accumulating new samples into a high-resolution history buffer, an efficient history-check and reuse scheme improves temporal stability. To our knowledge, this is the first work to bring real-time neural supersampling to mobile devices. Because no suitable training data exist, we present a new dataset containing 57 training and test sequences from three game scenes. Furthermore, based on the rendered motion vectors and a visual perception study, we introduce a new metric, inter-frame structural similarity (IF-SSIM), to quantitatively measure the temporal stability of rendered videos. Extensive evaluations demonstrate that our supersampling model outperforms existing and alternative solutions in both performance and temporal stability.
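
The abstract names two rendering-side ingredients: a sub-pixel sample pattern that alternates from frame to frame during rasterization, and a high-resolution history buffer into which new samples are accumulated. The record does not spell out either in detail, so the following NumPy sketch is only an illustration under stated assumptions (a 2x upscaling factor, a four-frame jitter cycle, and a nearest-neighbour history warp), not the paper's exact method.

```python
import numpy as np

# Assumed 2x2 upscaling: each low-res pixel covers a 2x2 block of high-res
# pixels. Cycling the rasterizer's sub-pixel jitter through four positions
# gives every high-res pixel a fresh sample once every four frames.
SCALE = 2
JITTER_PATTERN = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]

def jitter_for_frame(frame_index):
    """Sub-pixel offset (in low-res pixel units) applied to the projection
    when rasterizing this frame."""
    return JITTER_PATTERN[frame_index % len(JITTER_PATTERN)]

def accumulate(history, low_res_frame, frame_index, motion_vectors):
    """Scatter the frame's new samples into the high-res history buffer.

    history:        (H*SCALE, W*SCALE, 3) running high-res buffer
    low_res_frame:  (H, W, 3) newly rendered low-res frame
    motion_vectors: (H*SCALE, W*SCALE, 2) per-pixel motion in pixels
    """
    # Reproject the history to the current frame (nearest-neighbour warp;
    # the paper's "history check", e.g. rejection of stale samples, is
    # omitted here for brevity).
    hh, hw = history.shape[:2]
    ys, xs = np.mgrid[0:hh, 0:hw]
    src_y = np.clip((ys - motion_vectors[..., 1]).round().astype(int), 0, hh - 1)
    src_x = np.clip((xs - motion_vectors[..., 0]).round().astype(int), 0, hw - 1)
    warped = history[src_y, src_x]

    # Write this frame's new samples at the high-res sites its jitter covered.
    jx, jy = jitter_for_frame(frame_index)
    ox, oy = int(jx * SCALE), int(jy * SCALE)
    warped[oy::SCALE, ox::SCALE] = low_res_frame
    return warped
```

The IF-SSIM metric is likewise only named here. One plausible reading, shown below purely as an assumption, is to motion-compensate the previous frame with the rendered motion vectors and score its structural similarity to the current frame, so that flicker surviving motion compensation lowers the score.

```python
import numpy as np
from skimage.metrics import structural_similarity  # scikit-image

def if_ssim(prev_frame, cur_frame, motion_vectors):
    """Assumed form of inter-frame SSIM: SSIM between the motion-compensated
    previous frame and the current frame (float images in [0, 1])."""
    hh, hw = prev_frame.shape[:2]
    ys, xs = np.mgrid[0:hh, 0:hw]
    src_y = np.clip((ys - motion_vectors[..., 1]).round().astype(int), 0, hh - 1)
    src_x = np.clip((xs - motion_vectors[..., 0]).round().astype(int), 0, hw - 1)
    aligned = prev_frame[src_y, src_x]
    return structural_similarity(aligned, cur_frame, channel_axis=-1,
                                 data_range=1.0)
```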

Type: Article
Title: MNSS: Neural Supersampling Framework for Real-Time Rendering on Mobile Devices
Location: United States
Open access status: An open access version is available from UCL Discovery
DOI: 10.1109/tvcg.2023.3259141
Publisher version: https://doi.org/10.1109/tvcg.2023.3259141
Language: English
Additional information: This version is the author accepted manuscript. For information on re-use, please refer to the publisher’s terms and conditions.
Keywords: Deep learning; neural supersampling; real-time rendering
UCL classification: UCL
UCL > Provost and Vice Provost Offices > UCL BEAMS
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science
URI: https://discovery.ucl.ac.uk/id/eprint/10215223
