Game development presentations - a useful reference

Graphics Programming Conference 2024. 19 session videos.


The Graphics Programming Conference is a three-day event in November. We have keynotes from industry experts, presentations, and “masterclass” workshops with the ability to dive into specific technology in a hands-on environment.
 
Wonder if Intel will actually push this. Until NVIDIA decides to push extrapolation or frameless rendering themselves, they likely won't take kindly to devs doing it, which will make it hard to get it into games. Maybe if Valve launches Index 2 they could be convinced to showcase it? VR has the greatest need.
Could they push ExtraSS as their engine-integrated solution to devs while introducing GFFE as a driver-level feature like AMD Fluid Motion Frames?

Edit: After reading further, probably not.
Note that we use the term "G-buffer free" to refer to the absence of G-buffers for extrapolated frames only. The depth buffer and motion vectors for rendered frames are used since they are usually readily available in the rendering engine without additional cost.
 
The game needs to render a slightly larger viewport to cover expected de-occluded pixels along the edges. You also want the game engine to provide the view matrix for the rendered frames, and the view matrix for the desired extrapolated frame based on real-time user input (no need to dead-reckon that; reading input is cheap to do at high FPS). It needs very little extra work in the game engine, but it does need a little.

PS. they say camera pose instead of view matrix, same difference.
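To make the engine-side work concrete, here's a minimal sketch of the two pieces described above: a padded render size for edge de-occlusions, and an extrapolated-frame camera pose built from the latest input. This is not the paper's actual API; the function names, the ~5% guard band, and the pose values are all assumptions for illustration.

```cpp
// Hypothetical sketch of the engine-side work for frame extrapolation:
// (1) render with a small guard band so edge de-occlusions have data,
// (2) build the extrapolated frame's camera pose from the *latest* input
//     instead of dead reckoning it.
#include <cmath>
#include <cstdio>

struct CameraPose { float position[3]; float yaw; float pitch; };

// Placeholder for the engine's usual input path; reading input is cheap
// enough to do once per extrapolated frame.
CameraPose PollLatestCameraInput()
{
    return CameraPose{{0.0f, 1.7f, 0.0f}, 0.01f, -0.02f};  // dummy values
}

// Pad the render resolution by a small guard band (value assumed) so pixels
// that become de-occluded along the screen edges exist in the rendered frame.
void PaddedRenderSize(int w, int h, float guardBand, int* outW, int* outH)
{
    *outW = static_cast<int>(std::ceil(w * (1.0f + guardBand)));
    *outH = static_cast<int>(std::ceil(h * (1.0f + guardBand)));
}

int main()
{
    int renderW = 0, renderH = 0;
    PaddedRenderSize(2560, 1440, 0.05f, &renderW, &renderH);  // ~5% guard band, assumed

    // Per extrapolated frame: sample input *now* and hand the resulting camera
    // pose / view matrix to the extrapolation pass alongside the rendered
    // frames' poses, depth, and motion vectors.
    CameraPose extrapolatedPose = PollLatestCameraInput();

    std::printf("render %dx%d, extrapolated yaw %.3f pitch %.3f\n",
                renderW, renderH, extrapolatedPose.yaw, extrapolatedPose.pitch);
}
```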
 
I'm really enjoying the lighting in Path of Exile 2, and the game runs great. Radiance Cascades seems like a really interesting approach. I believe that Alexander Sannikov at Grinding Gear Games is the inventor of this solution. It seems like there are a lot of people getting interested in radiance cascades, so it'll be cool to see where it goes.





Here's a video I took showing radiance cascades in Path of Exile 2.



I'm really curious to see if any work gets done to improve it with off-screen rays, either hardware ray tracing or some kind of SDF proxy like Lumen. Right now the main issue in 3D is having lighting disappear when the source is occluded.

Edit: The original paper
 
I'm really curious to see if any work gets done to improve it with off-screen rays, either hardware ray tracing or some kind of SDF proxy like Lumen. Right now the main issue in 3D is having lighting disappear when the source is occluded.
It has some interesting properties, but I think it's far from obvious how it would be practical in a 3D world, beyond just the screen space traces. As the paper notes:

For example, even though storing the "tail" of a full 3d radiance field takes only as much space as storing its cascade 0, just storing a cascade 0 is practically equivalent to voxelizing the entire scene – this is often a "dealbreaker" for large-scale scenes.
...
However, it is unclear how to calculate world space radiance intervals anywhere near as efficiently as by raymarching in screenspace.
There's really only a single example in the paper in a 3D world, and the results are obviously far too quantized even in a small toy scene. The author notes that hierarchical methods could be applied, but history has shown that "just use sparse voxels!" is far from a silver bullet for the N^3 scaling of world space voxelization.
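To put that scaling in concrete terms (illustrative numbers, not from the paper): a cascade 0 probe grid covering a scene at 256^3 spatial resolution, with even a handful of directions per probe and fp16 RGB radiance per interval, is already around 256^3 × 8 × 6 bytes ≈ 0.8 GB, and each doubling of spatial resolution multiplies that by 8. That cubic growth is exactly why "just storing cascade 0" reads as a dealbreaker for large scenes.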

I do think it's worth noting that the property of encoding high spatial resolution near contact points and shifting to high angular resolution in the far field is extremely desirable, not just for GI but even just for shadows (and indeed this appears to follow on the author's earlier shadow-specific work).

My main concern here though is that the data structure optimizes a bit too much for sampling when the "how to generate it" part is far from obvious. The claims in the introduction about not needing temporal reprojection and being able to fully recompute the structure every frame appear to apply only to the flatland/2D case as implemented in PoE2; even with all the handwaving about 3D, the few examples they give also mention using various amounts of temporal reuse. And indeed if they didn't, the claims would be impossible - there's no getting around the number of rays required in arbitrary scenes with specular reflections. I vaguely remember a paper that showed how to formulate arbitrary computation as a path tracing problem in a scene with carefully positioned mirrors.

Already we're trending towards the cost of maintaining the various world space data structures being the main issue, beyond the actual tracing. Obviously both are important, but even if raytracing was infinitely fast, a lot of current games would not even get that much faster or prettier (although we could certainly get rid of some amount of ghosting).

Thanks for the link though - interesting to read through and definitely seems like a good tradeoff for PoE2 and similar type games.
 
Already we're trending towards the cost of maintaining the various world space data structures being the main issue, beyond the actual tracing. Obviously both are important, but even if raytracing was infinitely fast, a lot of current games would not even get that much faster or prettier (although we could certainly get rid of some amount of ghosting).
RT power would mean more rays so less noise and sampling artefacts. I guess that's the balance: RT resolving power is all about tracing rays and getting more info, while world updating is all about ray-traceable detail. One without the other would mean either a fabulously traced scene of simple, inaccurate geometry, or very sparse and noisy sampling of an accurate scene representation.
 
Radiance Cascades seems like a really interesting approach.
The naive radiance cascade leaks all over the place because the interpolation of higher cascades, while conceptually appealing, is a really poor approximation. They have a "bilinear fix" for that where they trace from the lower cascade's interval end to the higher cascade's interval starts, but that destroys a bit of the elegance.

AFAICS they don't do correct hemicircular sampling. The cascade 0 intervals are not equally distributed around the surface normal, so they sample the wrong hemicircle after fanout.
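For anyone not familiar with the interval layout being discussed, here's a tiny flatland sketch with assumed parameters (4x branching per cascade is a common choice, not necessarily what PoE2 uses): each cascade covers a ray interval starting where the previous one ends, with more directions per probe. The "bilinear fix" above traces from the end of a cascade's interval to the starts of the next cascade's intervals rather than interpolating their pre-integrated radiance directly.

```cpp
// Illustrative flatland radiance-cascade interval layout (parameters assumed).
#include <cstdio>

int main()
{
    const float base_len  = 1.0f;  // cascade 0 interval length (world units), assumed
    const int   base_dirs = 4;     // cascade 0 directions per probe, assumed
    const int   branching = 4;     // 4x longer intervals, 4x more directions per cascade

    float start = 0.0f, len = base_len;
    int dirs = base_dirs;
    for (int c = 0; c < 5; ++c) {
        std::printf("cascade %d: interval [%6.1f, %6.1f), %4d directions/probe\n",
                    c, start, start + len, dirs);
        start += len;      // the next cascade begins where this one ends
        len   *= branching;
        dirs  *= branching;
    }
}
```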

I can't escape the feeling that the concept of a hierarchy of probes with increasing angular resolution can have better implementations than this. Working on an SVO rather than doing actual ray tracing/casting to determine the intervals ... maybe even with neural networks :)

There's really only a single example in the paper in a 3D world, and the results are obviously far too quantized even in a small toy scene. The author notes that hierarchical methods could be applied, but history has shown that "just use sparse voxels!" is far from a silver bullet for the N^3 scaling of world space voxelization.

Yet it's not really different from "just BVH the entire scene".

PS. isn't Lumen voxel lighting kinda like radiance cascades?
 
RT power would mean more rays so less noise and sampling artefacts.
Yes, although on the GI side you also need to shade the rays at some frequency. Diffuse GI is - as always - the "easy"/cheap case here, while with specular there's a limit to how much cheating you can do. I think a lot of the tradeoffs really come down to how you can cheat/approximate the theoretically unbounded specular cases in a way that tends to work with a given set of content, because basically all real-time techniques break down spectacularly in various situations right now - places where offline rendering can still just brute-force a zillion rays.
 
Where does shader LOD on rays fit into the pipeline? Are simplified shaders part of the surface's material, or are they substituted on a ray evaluation? Does RT lead to more shaders, or more complex shaders?
 
Where does shader LOD on rays fit into the pipeline? Are simplified shaders part of the surface's material, or are they substituted on a ray evaluation? Does RT lead to more shaders, or more complex shaders?
Depends a ton on the implementation, but very often the shading of the RT scene is simpler. In games that do mostly diffuse GI it's often not even shaded, it's just a flat color. The more specular, the more you have to care, but you'll notice in most games that both geometry and shading are still quite simplified on the RT side.

RT shaders are handled differently from primary shading, and in various ways. In the naive DXR 1.0-style "hit shaders", you collect all the shaders for anything that might possibly be hit by a ray into a kind of ubershader/uber-PSO that the hardware can invoke depending on the hit. Unfortunately, it has many of the negative properties of ubershaders on GPUs as well: a single shader that needs a lot of registers or a large ray payload will affect the performance of all ray tracing, even if that shader is never invoked. Because of this, many renderers separate out the shading of hits (similar to deferred shading for primary rays) and use just simple hit/miss ray queries, which works well for some cases and poorly for others. Sadly, the answer to most of these sorts of questions right now is usually "all of the above" for any suitably complex engine :S
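As a rough illustration of the "separate out the shading of hits" approach, here's a conceptual sketch. All types and functions are hypothetical stand-ins (this is not DXR's API): rays return only a minimal hit record via a simple hit/miss query, and material evaluation happens in a later pass, so one register-hungry material can't drag down every traced ray.

```cpp
// Conceptual sketch of deferred hit shading with minimal ray queries
// (hypothetical API, stubbed so it compiles and runs).
#include <cstdint>
#include <vector>

struct Ray       { float origin[3]; float dir[3]; float tMax; };
struct HitRecord { bool hit; float t; uint32_t instanceId; uint32_t primitiveId; float u, v; };

// Placeholder closest-hit query against the acceleration structure.
HitRecord TraceClosestHit(const Ray& /*ray*/)
{
    return HitRecord{false, 0.0f, 0u, 0u, 0.0f, 0.0f};  // stub: everything misses
}

// Pass 1: trace, keeping only hit/miss plus IDs (small, fixed-size payload).
std::vector<HitRecord> TraceAll(const std::vector<Ray>& rays)
{
    std::vector<HitRecord> hits;
    hits.reserve(rays.size());
    for (const Ray& r : rays)
        hits.push_back(TraceClosestHit(r));
    return hits;
}

// Pass 2: shade the hits separately (e.g. binned by material), analogous to
// deferred shading of primary rays; simplified "RT materials" can be used here.
void ShadeHits(const std::vector<HitRecord>& hits)
{
    for (const HitRecord& h : hits) {
        if (!h.hit) continue;  // miss: sample sky/environment instead
        // look up a simplified material from (instanceId, primitiveId) and shade
    }
}

int main()
{
    std::vector<Ray> rays(4, Ray{{0, 0, 0}, {0, 0, 1}, 1000.0f});
    ShadeHits(TraceAll(rays));
}
```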
 
Just did a quick check and it seems like enabling GI in Path of Exile 2 costs about 1.6ms at 1440p native on my RTX 3080 (200 fps -> 150 fps) in the scene I was looking at. It's pretty cheap, and I think the game looks great. That said, screen-space issues are present, but overall it's a nice effect. I think I've played games with AO that looks worse and costs about as much.

That was with GI and shadows on high, not the default low. Ultra has a much bigger hit.
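(For reference, the frame-time arithmetic checks out: 1000/150 − 1000/200 ≈ 6.7 − 5.0 ≈ 1.7 ms, consistent with the ~1.6 ms figure.)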
 
played games with AO that looks worse and costs about as much.
Please note that ball lightning doesn't act as a light source (for example), and terrain AO might miss the mark too much even though it's an ideal scenario for screen space. I'd still prefer Carmack's idea of how they could've shipped Rage with separate diffuse and swapped it on the fly, claiming no downside to real-time lighting.
 