Game development presentations - a useful reference

Ray Tracing Essentials Part 6: The Rendering Equation
April 22, 2020
In Part 6, NVIDIA’s Eric Haines describes the ray tracing rendering equation. Arguably the most important equation in realistic computer graphics is the rendering equation. In this talk we show this equation and explain each term. Pure path tracing will always eventually give the right answer, but the key word is “eventually.” By using better sampling strategies, shooting rays where they can do the most good, we can dramatically cut rendering times. Doing so lets us use path tracing even for interactive games, such as Quake II.
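For reference, the rendering equation the talk walks through is usually written in this standard form (Kajiya 1986):

```latex
L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, d\omega_i
```

Outgoing radiance L_o at point x in direction ω_o equals emitted radiance L_e plus incoming radiance L_i from every direction ω_i over the hemisphere Ω, weighted by the BRDF f_r and the cosine term (ω_i · n). The "better sampling strategies" the talk mentions are about estimating this integral with fewer rays.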
 
Rendering Millions of Dynamic Lights in Real-Time
May 14, 2020

The NVIDIA research collaboration with the Visual Computing Lab at Dartmouth College allows direct lighting from millions of moving lights with today’s ray budgets. The approach requires no complex light structure, no baking, and no global scene parameterization, yet gives results up to 65x faster than prior state of the art. All lights cast shadows, everything can move arbitrarily, and new emitters can be added dynamically.

The paper, “Spatiotemporal Reservoir Resampling for Real-Time Ray Tracing with Dynamic Direct Lighting,” provides theoretical and implementation details for the reservoir-based spatiotemporal importance resampling (ReSTIR) technique.
https://news.developer.nvidia.com/rendering-millions-of-dynamics-lights-in-realtime/
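At the core of ReSTIR is streaming weighted reservoir sampling: each pixel keeps one light sample, chosen with probability proportional to its weight, in O(1) memory no matter how many candidate lights are streamed through. A minimal Python sketch of just that primitive (illustrative names; not the paper's actual implementation, which also reuses reservoirs spatially and temporally):

```python
import random

class Reservoir:
    """Streaming weighted reservoir sampling (single sample).

    Keeps one candidate y with probability proportional to its weight,
    using O(1) memory regardless of how many candidates are streamed in.
    """
    def __init__(self):
        self.y = None        # currently selected sample
        self.w_sum = 0.0     # running sum of all candidate weights
        self.m = 0           # number of candidates seen

    def update(self, x, w, rng=random):
        self.w_sum += w
        self.m += 1
        # Replace the kept sample with probability w / w_sum
        if self.w_sum > 0 and rng.random() < w / self.w_sum:
            self.y = x

# Stream light candidates (index, weight) through the reservoir
rng = random.Random(42)
r = Reservoir()
weights = [0.1, 2.5, 0.4, 1.0]
for i, w in enumerate(weights):
    r.update(i, w, rng)
print(r.y, r.w_sum, r.m)
```

ReSTIR's trick is that two reservoirs can be merged with the same `update` call (feeding one reservoir's sample and `w_sum` into another), which is what makes reuse across neighboring pixels and previous frames cheap.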
 
Gears 5 – High-Gear Visuals On Multiple Platforms
Mike Perzel (PC Rendering Lead - The Coalition)

Chris Wallis (Rendering Engineer - The Coalition)
Jordan Logan (AMD)
Bringing the Gears franchise to the PC is not something The Coalition takes lightly. The PC version cannot be a port; it has to meet or exceed the level of the console products to ensure fans get the same great experience from the game no matter how they want to play.

This talk will discuss Direct3D® 12 in general, as well as some of the features that were leveraged to accomplish this goal, such as Async Compute, Tiled Resources, Debugging, Copy Queues, and HDR.


https://gpuopen.com/video-gears-5/
 
On Ray Reordering Techniques for Faster GPU Ray Tracing
May 5, 2020
We study ray reordering as a tool for increasing the performance of existing GPU ray tracing implementations. We focus on ray reordering that is fully agnostic to the particular trace kernel. We summarize the existing methods for computing the ray sorting keys and discuss their properties. We propose a novel modification of a previously proposed method using termination point estimation that is well-suited to tracing secondary rays. We evaluate the ray reordering techniques in the context of wavefront path tracing using the RTX trace kernels. We show that ray reordering yields significantly higher trace speed on recent GPUs (1.3–2.0×), but recovering the reordering overhead in the hardware-accelerated trace phase is problematic.
https://dl.acm.org/doi/fullHtml/10.1145/3384382.3384534
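A common family of sorting keys the paper surveys quantizes each ray's origin (optionally combined with direction or an estimated termination point) against the scene bounds and interleaves the bits into a Morton code, so that sorting brings spatially coherent rays together. A simplified Python sketch under that assumption (the function names are illustrative):

```python
def part1by2(x):
    """Spread the 10 low bits of x so there are two zero bits between each."""
    x &= 0x3FF
    x = (x | (x << 16)) & 0xFF0000FF
    x = (x | (x << 8))  & 0x0300F00F
    x = (x | (x << 4))  & 0x030C30C3
    x = (x | (x << 2))  & 0x09249249
    return x

def morton3d(x, y, z):
    """Interleave three 10-bit coordinates into a 30-bit Morton code."""
    return (part1by2(z) << 2) | (part1by2(y) << 1) | part1by2(x)

def ray_sort_key(origin, scene_min, scene_max, bits=10):
    """Quantize a ray origin to the scene AABB and return its Morton key."""
    scale = (1 << bits) - 1
    q = [int((o - lo) / (hi - lo) * scale)
         for o, lo, hi in zip(origin, scene_min, scene_max)]
    return morton3d(*q)

# Sort rays by key so spatially close rays become adjacent in memory
rays = [(0.9, 0.9, 0.9), (0.1, 0.1, 0.1), (0.12, 0.1, 0.09)]
keys = [ray_sort_key(o, (0, 0, 0), (1, 1, 1)) for o in rays]
order = sorted(range(len(rays)), key=lambda i: keys[i])
print(order)
```

The paper's point is that computing and sorting by such keys is cheap relative to shading, but when the trace itself runs on RT hardware the saved trace time can be smaller than the reordering cost.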
 
Implementing Stochastic Levels of Detail with Microsoft DirectX Raytracing
June 15, 2020
Level-of-detail (LOD) refers to replacing high-resolution meshes with lower-resolution meshes in the distance, where details may not be significant. This technique can help reduce memory footprint and geometric aliasing. Most importantly, it has long been used to improve rasterization performance in games. But does that apply equally to ray tracing?
...
With this post and the accompanying sample code, we showed a straightforward, discrete LOD mechanism that is in use in various shipping games. We showed that, depending on the situation, the speedups delivered by this mechanism can be significant. We also demonstrated one possible way to implement a fully hardware-accelerated, stochastic LOD approach in DXR that significantly reduces the popping artifacts of discrete LOD with only a small impact on performance.
https://devblogs.nvidia.com/implementing-stochastic-lod-with-microsoft-dxr/
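The "stochastic" part can be illustrated with a small sketch (illustrative, not the sample's actual code; the distance-to-LOD mapping and constants are assumptions): instead of hard-switching between two discrete LODs at a distance threshold, compute a fractional LOD and round it up or down at random, using the fraction as the probability. The expected geometry then matches the continuous LOD, so discrete popping turns into easily filtered noise.

```python
import math
import random

def continuous_lod(distance, base_distance=10.0, lod_step=2.0):
    """Map hit distance to a fractional LOD level (0 = full detail)."""
    return max(0.0, math.log(distance / base_distance, lod_step))

def stochastic_lod(distance, u, max_lod=4):
    """Pick a discrete LOD: round the fractional level up or down at random.

    u is a uniform random number in [0, 1); using the fractional part of
    the LOD as the rounding probability makes the expected geometry match
    the continuous LOD, so transitions dither instead of popping.
    """
    lod = continuous_lod(distance)
    lo = int(lod)
    frac = lod - lo
    chosen = lo + (1 if u < frac else 0)
    return min(chosen, max_lod)

rng = random.Random(1)
# At 10 units we are exactly at LOD 0, so the choice is deterministic:
print(stochastic_lod(10.0, rng.random()))
# At 15 units the fractional LOD is log2(1.5) ~= 0.585, so the picks
# average near 0.585 across many rays:
picks = [stochastic_lod(15.0, rng.random()) for _ in range(1000)]
print(sum(picks) / len(picks))
```

In the DXR sample this per-ray choice is realized with instance masks, so the hardware skips the non-chosen LOD instances during traversal.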
 
Awesome :D
I expected a big hit due to divergence, and it probably would be one if they used more than just one model. But it seems a simple and practical solution. :)

Edit:
I felt a bit dumb at first. Why did I miss this simple solution and keep complaining?
Not sure, but...
With this solution, the whole path has to use the same LOD.
Because we start the path from the first hit we get from screen space, we know the distance and select the LOD from that. Seems fine.
But what happens if the ray travels a long distance through the whole scene? The distant meshes would not be available at all, because they only have two LODs, either higher or lower.

So, am I missing something? Does this work only because they use only one model and have all LOD instances at every place?
 
Creating Optimal Meshes for Ray Tracing
June 22, 2020
When you are creating triangle meshes for ray tracing or reusing meshes that have been successfully used in rasterization-based rendering, there are some pitfalls that can cause surprising performance issues.

Some mesh properties that have been acceptable in rasterization can be problematic in ray tracing or require specific handling to work as expected. This post reveals those pitfalls and gives practical advice to get around them.
  • Avoid elongated triangles in meshes
  • Rebuild deformable meshes when needed
  • Be careful with degenerate triangles
  • Merge and split meshes judiciously
  • Optimize alpha tested meshes
https://developer.nvidia.com/blog/creating-optimal-meshes-for-ray-tracing/
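For the first and third pitfalls above, "elongated" can be made concrete as a high aspect ratio between a triangle's longest edge and its height over that edge; slivers inflate the AABBs that acceleration structures use and waste traversal work. A small Python check (the metric choice and thresholds are illustrative, not from the post):

```python
import math

def triangle_aspect_ratio(a, b, c):
    """Ratio of the longest edge to the triangle's height over that edge.

    Thin slivers score large; an equilateral triangle scores
    2 / sqrt(3) ~= 1.15, close to the minimum possible.
    """
    edges = [math.dist(a, b), math.dist(b, c), math.dist(c, a)]
    longest = max(edges)
    # Heron's formula for the area
    s = sum(edges) / 2.0
    area = math.sqrt(max(0.0, s * (s - edges[0]) * (s - edges[1]) * (s - edges[2])))
    if area == 0.0:
        return float("inf")  # degenerate (zero-area) triangle
    height = 2.0 * area / longest
    return longest / height

equilateral = ((0, 0), (1, 0), (0.5, math.sqrt(3) / 2))
sliver = ((0, 0), (10, 0), (5, 0.01))
print(round(triangle_aspect_ratio(*equilateral), 2))  # ~1.15
print(triangle_aspect_ratio(*sliver) > 100)           # True
```

A check like this run at export time can flag both elongated and degenerate triangles before they become BVH quality problems.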
 
With this solution, the whole path has to use the same LOD. Because we start the path from the first hit we get from screen space, we know the distance and select the LOD from that. Seems fine.

But what happens if the ray travels a long distance through the whole scene? The distant meshes would not be available at all, because they only have two LODs, either higher or lower.

So, am I missing something? Does this work only because they use only one model and have all LOD instances at every place?

You’re right. The proposed approach would only work for relatively short secondary rays. For objects very far away, both LODs will be ignored. E.g., when looking at a mirror with a building far in the distance behind you, that building would be missing in the reflection.

DXR supports specifying an 8-bit instance mask at TLAS build, which is then combined with a per-ray mask to determine whether an instance should be tested for intersection or ignored. Specifically, the two masks are ANDed together. If the result is zero, the instance is ignored.
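The masking rule described above is a simple bitwise test. A Python sketch of the culling decision, mirroring DXR's 8-bit InstanceMask / per-ray InstanceInclusionMask semantics (the LOD bit assignments are illustrative, not from the post):

```python
def instance_visible(instance_mask: int, ray_mask: int) -> bool:
    """DXR-style culling: test the 8-bit instance mask against the ray mask.

    The instance is considered for intersection only when the bitwise AND
    of the two masks (low 8 bits) is non-zero.
    """
    return (instance_mask & ray_mask & 0xFF) != 0

LOD_NEAR = 0x01   # illustrative bit assignments
LOD_FAR  = 0x02

# A short secondary ray only wants near-LOD instances:
print(instance_visible(LOD_NEAR, LOD_NEAR))           # True
print(instance_visible(LOD_FAR, LOD_NEAR))            # False
# A ray that accepts either LOD sets both bits:
print(instance_visible(LOD_FAR, LOD_NEAR | LOD_FAR))  # True
```

With 8 bits you get at most 8 independent groups, which is why the stochastic LOD sample assigns one bit per LOD level and why very distant LODs can end up with no bit a given ray accepts.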
 
DXR supports specifying an 8-bit instance mask at TLAS build, which is then combined with a per-ray mask to determine whether an instance should be tested for intersection or ignored. Specifically, the two masks are ANDed together. If the result is zero, the instance is ignored.
Thanks, I'd missed that 'ignoring' detail.
Also, my context when thinking about LOD is always huge open worlds. But the technique is probably aimed more at LOD for characters and dynamic objects.

Check out 02:54
He could probably hide the phasing of the cascade switches completely by making the blending range span the whole visible range of the cascade. This reduces detail by half, but I would prefer that over the phasing.
 
Promising, but I was intrigued when he showed the alleged SDF representation of the scene: it looks just as blocky and sawtoothy as a bog-standard marched voxel volume with rounded corners. I'm not sure he is using actual SDF volumes.


Check out 02:54

It's a neat newer version of cone tracing. I follow Godot's dev on Twitter and he's been talking about it. Solid results for how cheap it is and how little light leaking happens, especially considering there's no temporal component. He wants to keep motion vectors out of the code so the graphics code stays simple for people to modify themselves.
 