RenderGAN: Enhancing Real-time Rendering Efficiency with Deep Learning / Mameli, Marco; Paolanti, Marina; Mancini, Adriano; Zingaretti, Primo; Pierdicca, Roberto. - In: ACM TRANSACTIONS ON MULTIMEDIA COMPUTING, COMMUNICATIONS AND APPLICATIONS. - ISSN 1551-6857. - 21:3(2025), pp. 99.1-99.22. [10.1145/3712263]
RenderGAN: Enhancing Real-time Rendering Efficiency with Deep Learning
Mameli, Marco; Paolanti, Marina; Mancini, Adriano; Zingaretti, Primo; Pierdicca, Roberto
2025-01-01
Abstract
In the domain of computer graphics, achieving high visual quality in real-time rendering remains a formidable challenge due to the inherent time-quality tradeoff. Conventional real-time rendering engines sacrifice visual fidelity for interactive performance, while image generation using path-tracing techniques can be exceedingly time-consuming. In this article, we introduce RenderGAN, a deep learning-based solution designed to address this critical challenge in real-time rendering. RenderGAN uses G-Buffers and information from a real-time rendering engine as inputs to produce output images with exceptional visual fidelity. Its encoder-decoder architecture, trained using the Generative Adversarial Network (GAN) framework with perceptual loss, enhances image realism. To evaluate RenderGAN's effectiveness, we quantitatively compare the generated images with those of a path-tracing engine, obtaining a remarkable Universal Image Quality Index (UIQI) value of 0.898. RenderGAN's open source nature fosters collaboration, driving advancements in real-time computer graphics and rendering techniques. By bridging the gap between real-time and path-tracing rendering, RenderGAN opens new horizons for accelerated image generation, inspiring innovation and unlocking the full potential of real-time visual experiences. Project page: https://github.com/marcomameli1992/RenderNet
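The abstract reports image quality with the Universal Image Quality Index (UIQI). As a reference point, a minimal sketch of that metric is given below; this is a simplified global implementation of UIQI written for illustration, not the authors' evaluation code (the standard formulation averages the index over small sliding windows, which this sketch omits).

```python
import numpy as np

def uiqi(x: np.ndarray, y: np.ndarray) -> float:
    """Universal Image Quality Index between two grayscale images.

    Computed globally here: Q = (4 * cov * mx * my) /
    ((var_x + var_y) * (mx^2 + my^2)), so Q = 1 iff the images are
    identical. The canonical metric averages Q over 8x8 windows.
    """
    x = x.astype(np.float64).ravel()
    y = y.astype(np.float64).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    num = 4.0 * cov * mx * my
    den = (vx + vy) * (mx * mx + my * my)
    # Degenerate case: both images constant and equal in mean.
    return num / den if den != 0.0 else 1.0
```

By construction Q is bounded above by 1, and a perfect reconstruction scores exactly 1, which makes the reported value of 0.898 directly interpretable as closeness to the path-traced reference.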
| File | Type | License | Size | Format |
|---|---|---|---|---|
| Mameli_RenderGAN_VoR_2025.pdf (open access) | Publisher's version (published with the publisher's layout) | Creative Commons | 1.26 MB | Adobe PDF |
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.


