Game development presentations - a useful reference

https://research.nvidia.com/publication/2019-07_Dynamic-Many-Light-Sampling

Dynamic Many-Light Sampling for Real-Time Ray Tracing

Monte Carlo ray tracing offers the capability of rendering scenes with large numbers of area light sources---lights can be sampled stochastically and shadowing can be accounted for by tracing rays, rather than using shadow maps or other rasterization-based techniques that do not scale to many lights or work well with area lights. Current GPUs only afford the capability of tracing a few rays per pixel at real-time frame rates, making it necessary to focus sampling on important light sources. While state-of-the-art algorithms for offline rendering build hierarchical data structures over the light sources that enable sampling them according to their importance, they lack efficient support for dynamic scenes. We present a new algorithm for maintaining hierarchical light sampling data structures targeting real-time rendering. Our approach is based on a two-level BVH hierarchy that reduces the cost of partial hierarchy updates. Performance is further improved by updating lower-level BVHs via refitting, maintaining their original topology. We show that this approach can give error within 6% of recreating the entire hierarchy from scratch at each frame, while being two orders of magnitude faster, requiring less than 1 ms per frame for hierarchy updates for a scene with thousands of moving light sources on a modern GPU. Further, we show that with spatiotemporal filtering, our approach allows complex scenes with thousands of lights to be rendered with ray-traced shadows in 16.1 ms per frame.
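The abstract's key performance trick is refitting the lower-level BVHs: bounds are recomputed bottom-up while the tree topology stays fixed. A minimal sketch of that refit step, using hypothetical `AABB`/`Node` structures (the paper's actual data layout is not given here) and assuming children are stored after their parent so a back-to-front array walk visits children first:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical minimal structures for illustration.
struct AABB {
    float min[3], max[3];
    void expand(const AABB& b) {
        for (int i = 0; i < 3; ++i) {
            min[i] = std::min(min[i], b.min[i]);
            max[i] = std::max(max[i], b.max[i]);
        }
    }
};

struct Node {
    AABB bounds{};
    int left = -1, right = -1; // -1 marks a leaf
    int primitive = -1;        // leaf payload: index into primBounds
};

// Refit: recompute bounds bottom-up, keeping the tree topology fixed.
// Assumes children are stored after their parent in the node array,
// so a back-to-front walk processes children before parents.
void refit(std::vector<Node>& nodes, const std::vector<AABB>& primBounds) {
    for (int i = (int)nodes.size() - 1; i >= 0; --i) {
        Node& n = nodes[i];
        if (n.left < 0) {
            n.bounds = primBounds[n.primitive];   // leaf: copy moved primitive's box
        } else {
            n.bounds = nodes[n.left].bounds;      // interior: union of children
            n.bounds.expand(nodes[n.right].bounds);
        }
    }
}
```

Refitting is O(n) with no allocation, which is why it can stay under a millisecond per frame, at the cost of bounds quality degrading if primitives move far from their original positions.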

https://research.nvidia.com/sites/default/files/pubs/2019-07_Dynamic-Many-Light-Sampling//MPC19.pdf
 
http://advances.realtimerendering.com/s2019/index.htm

The program for the Advances in Real-Time Rendering course, next Monday, July 29th, 2019:


Advances in Real-Time Rendering in Games: Part I


9:00 am

Natalya Tatarchuk (Unity Technologies)

Welcome and Introduction


9:10 am

Steve McAuley (Ubisoft)

A Journey Through Implementing Multiscattering BRDFs and Area Lights


10:10 am

Anis Benyoub (Unity Technologies)

Leveraging Real-time Ray Tracing To Build A Hybrid Game Engine


11:10 am

Sebastian Tafuri (EA | Frostbite)

Strand-based Hair Rendering in Frostbite


11:40 am

Yury Uralsky (NVIDIA)

Mesh Shading: Towards Greater Efficiency Of Geometry Processing



12:10 pm

Natalya Tatarchuk (Unity Technologies)

Part I Closing Q&A

Advances in Real-Time Rendering in Games: Part II


2:00 pm

Natalya Tatarchuk (Unity Technologies)

Welcome (And Welcome Back!)


2:05 pm

Sean Feeley (Sony Santa Monica)

Interactive Wind and Vegetation in 'God Of War'


3:05 pm

Huw Bowles (Electric Square)

Multi-resolution Ocean Rendering in Crest Ocean System


4:05 pm

Fabian Bauer (Rockstar)

Creating the Atmospheric World of Red Dead Redemption 2: A Complete and Integrated Solution


5:05 pm

Natalya Tatarchuk (Unity Technologies)

Advances 2019 Closing Remarks
 
@JoeJ the one presentation mentions wanting LOD in DXR! :D
Yeah, I'm surely not the only one requesting this. But what should it look like? I think it's mainly two related questions:

1. Do we finally want / need continuous LOD?
I say yes. E.g. an image in one presentation shows an edgy shadow cast by a distant round object with low tessellation.
If we're only approaching more realism in lighting, the shadow will be soft, and setting LOD just by distance (ignoring shadows or reflections) would be fine.
But a hard transition between fixed LODs would still cause popping, and with realistic lighting this could become visible in shadows, reflections and GI. Not sure how hard transitions affect denoising either.
Further, continuous LOD would help to distribute detail across the screen better. Currently we often see low detail close up, but higher detail than necessary in the distance. Static level geometry often has no LOD at all.

2. Do we want compatibility with Mesh Shaders?
Curious whether this question could be the motivation for NV to support LOD for RT quickly, but I'm not sure it makes that much sense.
If we need to store the geometry in main memory anyway to support RT, it might be better to just write our own geometry processing in compute, update it in a time-sliced manner, and use it for both RT and rasterization, without needing mesh or tessellation shaders.
But the approach "mesh shading -> raster, and generate the BVH automatically" could make sense too. IDK.


I wonder how far we could already get on current hardware.
What I want is a static BVH with enlarged bounds, so that any continuous or hard change of the contained triangles is possible without recalculating the BVH from scratch.
This would require a dynamic number of triangles per node, and the ability to disable BVH nodes at the bottom when decreasing LOD.
Fixed-function (FF) units for faster BVH generation would not be necessary.
 
Vulkan Sessions SIGGRAPH 2019
There is also the Vulkan Ray-Tracing work that is ongoing across vendors. It is being based on the existing VK_NV_ray_tracing extension from NVIDIA, but with various additions and other changes to make it more applicable to the different software and hardware vendors. Both real-time ray tracing and hybrid ray tracing should be possible with this upcoming support.

To help adoption, Vulkan Ray-Tracing is aiming for "substantial compatibility" with Microsoft DirectX Raytracing (DXR) while keeping to Vulkan's design. Vulkan Ray-Tracing will debut once two different conformant implementations have been written.

Vulkan Ray Tracing TSG update: Yuriy O'Donnell (Epic Games)
24:30
OctaneRender, Light Field Displays, RenderToken Network (RNDR): Jules Urbach (Otoy)
49:10
 