Game development presentations - a useful reference

Abstract:
The sphere tracing algorithm provides a fast and high-quality strategy for visualizing surfaces encoded by signed distance functions (SDFs), which have become a centerpiece in a wide range of visual computing algorithms. In this paper we introduce a sphere tracing algorithm for a completely different class of functions, harmonic functions, opening up a whole new set of possibilities. Harmonic functions are found throughout geometric and visual computing, where they arise naturally as the solution to interpolation problems, and in the physical sciences, where they appear as solutions to the Laplace equation. Our algorithm for harmonic functions is similar in spirit to the sphere tracing algorithm for SDFs: by using a conservative lower bound on the distance to the level set, we can take much larger steps than with naïve ray marching. Our key observation is that for harmonic functions such a bound is given by Harnack's inequality. Unlike Lipschitz bounds used in traditional sphere tracing, this Harnack bound requires only the value of the function at a point—we use this bound to develop a sphere tracing algorithm that can also handle jump discontinuities arising in angle-based harmonic functions. We show how this algorithm can be used to directly visualize smooth surfaces reconstructed from point clouds (via Poisson surface reconstruction) or polygon soup (via generalized winding numbers) without performing linear solves or mesh extraction. We also show how it can be used to render nonplanar polygons (including those with holes), and to visualize key objects from mathematics, including knots, links, spherical harmonics, and Riemann surfaces.
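To make the abstract's core idea concrete, here is a minimal sketch of classic SDF sphere tracing in Python (function and variable names are mine, not the paper's): the signed distance at the current point is a conservative lower bound on the distance to the surface, so it doubles as a safe step size. The paper's contribution is an analogous safe step for harmonic functions, derived from Harnack's inequality instead of a Lipschitz bound.

```python
import numpy as np

def sphere_trace(sdf, origin, direction, max_dist=100.0, eps=1e-5, max_steps=256):
    """March along the ray, stepping by the SDF value, which is a
    conservative lower bound on the distance to the level set.
    (The paper replaces this step with one derived from Harnack's
    inequality, using only the harmonic function's value at a point.)"""
    direction = direction / np.linalg.norm(direction)
    t = 0.0
    for _ in range(max_steps):
        p = origin + t * direction
        d = sdf(p)           # safe step: no surface closer than d
        if d < eps:
            return t         # converged onto the surface
        t += d
        if t > max_dist:
            break
    return None              # miss

# example: unit sphere at the origin, ray from z = -3 toward it
unit_sphere = lambda p: np.linalg.norm(p) - 1.0
t_hit = sphere_trace(unit_sphere, np.array([0.0, 0.0, -3.0]),
                     np.array([0.0, 0.0, 1.0]))
```

Because each step is a guaranteed lower bound, the loop can never overshoot the surface, which is what lets it take much larger steps than fixed-step ray marching.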
 
Advances in Real-Time Rendering in Games 2024 Slides
  • Neural Light Grid: Modernizing Irradiance Volumes with Machine Learning - Michał Iwanicki (Activision)
  • Seamless Rendering on Mobile: The Magic of Adaptive LOD Pipeline - Shun Cao (Tencent Games)
  • Flexible and Extensible Shader Authoring in Frostbite with Serac - Simon Taylor (EA | Frostbite)
  • Announcing The Call of Duty Open-Source USD Caldera Data Set - Michael Vance (Activision)
  • Variable Rate Shading with Visibility Buffer Rendering - John Hable (Visible Threshold)
  • Shipping Dynamic Global Illumination in Frostbite - Diede Apers (EA | Frostbite)
  • Hemispherical Lighting Insights from the Call of Duty Production Lessons - Thomas Roughton (Activision)
  • Achieving scalable performance for large scale components with CBTs - Anis Benyoub (Intel), Jonathan Dupuy (Intel)
 
It's an interesting, if not exactly new, observation. Cascaded resolution has been around for a while, and inverting the angular/linear trade-off is a solid idea, but problems arise with smooth representations of, say, perfectly specular materials. Take a mirror: it has both a continuous linear and a continuous angular representation, but push it through this caching scheme and you'll see discontinuities as the representation "jumps" between cache points. For their type of content (isometric, top-down, mid-range camera) it's probably not much of a concern.
Bit of a necro, but I only just noticed Radiance Cascades.

Reflection is an extreme case more easily handled by direct ray tracing.

The biggest problem I see is that the radiance intervals all need to be traced, and in 3D there are a lot of them. Most of the intervals are empty, so it isn't that bad, but HWRT is an incredibly poor fit. You need something like a sparse voxel octree (SVO) or an SV64 filled with low-LOD proxies.
 
How do GPUs work? Branch Education dropped a new video to help explain.

I finally got around to watching this video to see if I could recommend it at work for non-graphics folks. It has excellent animation, and alongside the explanation you get a good high-level tour of what's going on, especially for those new to the makeup of a modern GPU. As a short-form video explainer, I can't really fault it for its inaccuracies or lack of detail, so overall it's a good intro for those beginning their GPU hardware journey.
 
This looks excellent:


It's in 10 parts and seems reasonably accessible conceptually despite the code.

The author of that series just released a github project: https://github.com/jbikker/tinybvh

tinybvh

Single-header BVH construction and traversal library written as "Sane C++" (or "C with classes"). The library has no dependencies.

Version

The current version of the library is a 'prototype'; please expect changes to the interface. Once the interface has settled, more functionality will follow. Plans:
  • Spatial Splits (SBVH)
  • Conversion to GPU-friendly format
  • Collapse to 4-wide and 8-wide BVH
  • Optimization of an existing BVH using treelet reinsertion
  • OpenCL traversal example
  • 'Compressed Wide BVH' data structure (CWBVH)
  • Efficient CWBVH GPU traversal
  • TLAS/BLAS traversal with BLAS transforms

These features have already been completed but need polishing and adapting to the interface, once it is settled. CWBVH GPU traversal combined with an optimized SBVH provides state-of-the-art #RTXOff performance; expect billions of rays per second.
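For readers new to BVHs, here is a minimal sketch of the kind of top-down median-split build and stack-based traversal such a library implements, in Python for brevity. This is illustrative only, with names of my own invention; it is not tinybvh's API, and real builders use far better heuristics (SAH, spatial splits) than a plain median split.

```python
import numpy as np

def slab_test(ray_o, inv_d, bmin, bmax):
    """Ray-AABB slab test; True if the ray (t >= 0) hits the box."""
    t1 = (bmin - ray_o) * inv_d
    t2 = (bmax - ray_o) * inv_d
    tmin = np.minimum(t1, t2).max()
    tmax = np.maximum(t1, t2).min()
    return tmax >= max(tmin, 0.0)

class Node:
    def __init__(self, bmin, bmax, prims=None, left=None, right=None):
        self.bmin, self.bmax = bmin, bmax
        self.prims, self.left, self.right = prims, left, right

def build(boxes, centroids, idx, leaf_size=2):
    """Top-down build: split primitive indices at the median of the widest axis."""
    bmin = boxes[idx, 0].min(axis=0)
    bmax = boxes[idx, 1].max(axis=0)
    if len(idx) <= leaf_size:
        return Node(bmin, bmax, prims=idx)
    axis = int(np.argmax(bmax - bmin))               # widest axis of the node bounds
    order = idx[np.argsort(centroids[idx, axis])]    # sort prims along it, cut in half
    mid = len(order) // 2
    return Node(bmin, bmax,
                left=build(boxes, centroids, order[:mid], leaf_size),
                right=build(boxes, centroids, order[mid:], leaf_size))

def traverse(root, boxes, ray_o, ray_d):
    """Stack-based traversal; returns indices of primitive boxes the ray hits."""
    inv_d = 1.0 / np.where(ray_d == 0.0, 1e-30, ray_d)  # avoid divide-by-zero
    hits, stack = [], [root]
    while stack:
        n = stack.pop()
        if not slab_test(ray_o, inv_d, n.bmin, n.bmax):
            continue                       # prune the whole subtree
        if n.prims is not None:            # leaf: test each primitive's own box
            hits += [int(i) for i in n.prims
                     if slab_test(ray_o, inv_d, boxes[i, 0], boxes[i, 1])]
        else:
            stack += [n.left, n.right]
    return hits

# three unit boxes: two along the x-axis, one off to the side in y
boxes = np.array([[[0, 0, 0], [1, 1, 1]],
                  [[10, 0, 0], [11, 1, 1]],
                  [[0, 5, 0], [1, 6, 1]]], dtype=float)
centroids = boxes.mean(axis=1)
root = build(boxes, centroids, np.arange(3))
hits = traverse(root, boxes, np.array([-5.0, 0.5, 0.5]), np.array([1.0, 0.0, 0.0]))
```

The entire value of the structure is in the `continue`: one failed box test prunes every primitive beneath that node, which is what turns linear-time ray casting into roughly logarithmic.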
 
Tried to search for this and didn't see it come up.


Here's the github link to their fast noise generator

I wonder how this would work with AI antialiasing/denoising/upscaling. My understanding of this paper is that the best results were achieved by optimizing the sampling pattern to complement the specific denoiser used. Could the sampling pattern be optimized for a specific AI denoising model, or could the model be trained to denoise a specific sampling pattern? I doubt it would be practical for developers to try to optimize their sampling patterns for existing closed-source, black-box models like DLSS 3.5, but Nvidia itself could supply the sampling pattern in its SDK. Alternatively, if the graphics APIs were updated to give developers access to the tensor cores (and their equivalents on non-Nvidia hardware), developers could create their own AI denoiser and a sampling pattern optimized to complement it.
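As a simplified illustration of what a "good" sampling pattern means in this context: blue-noise-like point sets spread samples evenly with no clumps, a property denoisers tend to like. The sketch below uses Mitchell's classic best-candidate algorithm to generate such a set — this is not the optimization from the paper under discussion, just a concrete baseline technique.

```python
import numpy as np

def best_candidate_points(n, k=16, seed=0):
    """Generate n 2D points in [0,1)^2 with a blue-noise-like distribution.
    For each new point, draw k random candidates and keep the one whose
    nearest existing point is farthest away (Mitchell's best-candidate)."""
    rng = np.random.default_rng(seed)
    pts = [rng.random(2)]
    for _ in range(n - 1):
        cands = rng.random((k, 2))
        # distance from each candidate to its nearest already-chosen point
        d = np.linalg.norm(cands[:, None, :] - np.array(pts)[None, :, :],
                           axis=2).min(axis=1)
        pts.append(cands[np.argmax(d)])   # keep the most isolated candidate
    return np.array(pts)

samples = best_candidate_points(64)
```

Raising `k` trades generation time for a more even distribution; a denoiser-aware scheme like the paper's would instead tune the pattern against the reconstruction filter itself.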
 